hadoop git commit: Configurable timeout between YARNRunner terminate the application and forcefully kill. Contributed by Eric Payne. (cherry picked from commit d39bc903a0069a740744bafe10e506e452ed7018)

2015-03-10 Thread junping_du
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 2f902a823 -> dbcdcb0d3


Configurable timeout between YARNRunner terminate the application and 
forcefully kill. Contributed by Eric Payne.
(cherry picked from commit d39bc903a0069a740744bafe10e506e452ed7018)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dbcdcb0d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dbcdcb0d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dbcdcb0d

Branch: refs/heads/branch-2
Commit: dbcdcb0d3ccc67db12104137d31cfc01cf6825ce
Parents: 2f902a8
Author: Junping Du junping...@apache.org
Authored: Tue Mar 10 06:21:59 2015 -0700
Committer: Junping Du junping...@apache.org
Committed: Tue Mar 10 06:23:24 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt|  3 +++
 .../apache/hadoop/mapreduce/MRJobConfig.java|  5 
 .../src/main/resources/mapred-default.xml   |  8 ++
 .../org/apache/hadoop/mapred/YARNRunner.java|  5 +++-
 .../apache/hadoop/mapred/TestYARNRunner.java| 26 
 5 files changed, 46 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dbcdcb0d/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 6242363..e4005eb 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -90,6 +90,9 @@ Release 2.7.0 - UNRELEASED
 MAPREDUCE-6267. Refactor JobSubmitter#copyAndConfigureFiles into it's own 
 class. (Chris Trezzo via kasha)
 
+MAPREDUCE-6263. Configurable timeout between YARNRunner terminate the 
+application and forcefully kill. (Eric Payne via junping_du)
+
   OPTIMIZATIONS
 
 MAPREDUCE-6169. MergeQueue should release reference to the current item 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dbcdcb0d/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
index 28a6e13..ce2b17c 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
@@ -633,6 +633,11 @@ public interface MRJobConfig {
   public static final int 
DEFAULT_MR_AM_HISTORY_USE_BATCHED_FLUSH_QUEUE_SIZE_THRESHOLD =
   50;
 
+  public static final String MR_AM_HARD_KILL_TIMEOUT_MS =
+  MR_AM_PREFIX + "hard-kill-timeout-ms";
+  public static final long DEFAULT_MR_AM_HARD_KILL_TIMEOUT_MS =
+  10 * 1000l;
+
   /**
* The threshold in terms of seconds after which an unsatisfied mapper 
request
* triggers reducer preemption to free space. Default 0 implies that the 
reduces

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dbcdcb0d/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
index da82dc2..72c4c5f 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
@@ -1783,6 +1783,14 @@
 </property>
 
 <property>
+  <name>yarn.app.mapreduce.am.hard-kill-timeout-ms</name>
+  <value>10000</value>
+  <description>
+    Number of milliseconds to wait before the job client kills the application.
+  </description>
+</property>
+
+<property>
   <description>CLASSPATH for MR applications. A comma-separated list
   of CLASSPATH entries. If mapreduce.application.framework is set then this
   must specify the appropriate classpath for that archive, and the name of

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dbcdcb0d/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/YARNRunner.java
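The YARNRunner.java hunk is truncated above, but the intent of MAPREDUCE-6263 is that the job client first asks the application to terminate and only forces a kill after the new timeout elapses. A minimal, hypothetical client-side sketch of overriding the 10000 ms default (the 30-second value and class name are illustrative, not part of the patch):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.MRJobConfig;

public class KillTimeoutExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Wait 30 seconds between asking the AM to shut down and forcefully
    // killing the YARN application; the shipped default is 10 * 1000 ms.
    conf.setLong(MRJobConfig.MR_AM_HARD_KILL_TIMEOUT_MS, 30 * 1000L);
    Job job = Job.getInstance(conf, "kill-timeout-example");
    // ... configure mapper, reducer and paths as usual, then submit the job.
  }
}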

hadoop git commit: HDFS-7830. DataNode does not release the volume lock when adding a volume fails. (Lei Xu via Colin P. McCabe)

2015-03-10 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 f1b32d143 -> eefca23e8


HDFS-7830. DataNode does not release the volume lock when adding a volume 
fails. (Lei Xu via Colin P. McCabe)

(cherry picked from commit 5c1036d598051cf6af595740f1ab82092b0b6554)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eefca23e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eefca23e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eefca23e

Branch: refs/heads/branch-2
Commit: eefca23e8c5e474de1e25bf2ec8a5b266bbe8cfe
Parents: f1b32d1
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Mar 10 18:20:25 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Tue Mar 10 19:45:18 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../hadoop/hdfs/server/common/Storage.java  |  2 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 16 +++-
 .../datanode/TestDataNodeHotSwapVolumes.java| 34 ++--
 .../fsdataset/impl/FsDatasetTestUtil.java   | 43 
 .../fsdataset/impl/TestFsDatasetImpl.java   | 43 
 6 files changed, 108 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eefca23e/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 9c2f979..90d470c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -827,6 +827,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7818. OffsetParam should return the default value instead of throwing
 NPE when the value is unspecified. (Eric Payne via wheat9)
 
+HDFS-7830. DataNode does not release the volume lock when adding a volume
+fails. (Lei Xu via Colin P. Mccabe)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/eefca23e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
index 6756636..d617fc5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
@@ -672,7 +672,7 @@ public abstract class Storage extends StorageInfo {
  */
 public void lock() throws IOException {
   if (isShared()) {
-LOG.info("Locking is disabled");
+LOG.info("Locking is disabled for " + this.root);
 return;
   }
   FileLock newLock = tryLock();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/eefca23e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 9376acc..3fbeaa7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -47,6 +47,7 @@ import javax.management.NotCompliantMBeanException;
 import javax.management.ObjectName;
 import javax.management.StandardMBean;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -377,6 +378,12 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 LOG.info("Added volume - " + dir + ", StorageType: " + storageType);
   }
 
+  @VisibleForTesting
+  public FsVolumeImpl createFsVolume(String storageUuid, File currentDir,
+  StorageType storageType) throws IOException {
+return new FsVolumeImpl(this, storageUuid, currentDir, conf, storageType);
+  }
+
   @Override
   public void addVolume(final StorageLocation location,
   final List<NamespaceInfo> nsInfos)
@@ -396,8 +403,8 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 final 
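The test hunks for this change are truncated above; the @VisibleForTesting createFsVolume factory exists so tests can make volume creation fail and then check that the storage lock is released. A rough sketch of how such a seam is typically exercised from a test in the same package, assuming JUnit plus Mockito and an already-initialized FsDatasetImpl named dataset (this code is an illustration, not the actual TestFsDatasetImpl change):

import static org.mockito.Mockito.*;
import java.io.File;
import java.io.IOException;

// ... inside a test method:
FsDatasetImpl spyDataset = spy(dataset);
doThrow(new IOException("injected volume failure"))
    .when(spyDataset)
    .createFsVolume(anyString(), any(File.class), any(StorageType.class));
// A subsequent addVolume(...) on the spy now fails, and per HDFS-7830 the
// in_use.lock taken on the new storage directory must be released again.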

hadoop git commit: YARN-3187. Documentation of Capacity Scheduler Queue mapping based on user or group. Contributed by Gururaj Shetty

2015-03-10 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8d5b01e00 -> a380643d2


YARN-3187. Documentation of Capacity Scheduler Queue mapping based on user or 
group. Contributed by Gururaj Shetty


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a380643d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a380643d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a380643d

Branch: refs/heads/trunk
Commit: a380643d2044a4974e379965f65066df2055d003
Parents: 8d5b01e
Author: Jian He jia...@apache.org
Authored: Tue Mar 10 10:54:08 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Tue Mar 10 11:15:57 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../src/site/markdown/CapacityScheduler.md  | 26 
 2 files changed, 29 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a380643d/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a6dcb29..82134db 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -381,6 +381,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3296. Mark ResourceCalculatorProcessTree class as Public for 
configurable
 resource monitoring. (Hitesh Shah via junping_du)
 
+YARN-3187. Documentation of Capacity Scheduler Queue mapping based on user
+or group. (Gururaj Shetty via jianhe)
+
   OPTIMIZATIONS
 
 YARN-2990. FairScheduler's delay-scheduling always waits for node-local 
and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a380643d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
index 3c32cdd..1cb963e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
@@ -69,6 +69,8 @@ The `CapacityScheduler` supports the following features:
 
 * **Resource-based Scheduling** - Support for resource-intensive applications, wherein an application can optionally specify higher resource requirements than the default, thereby accommodating applications with differing resource requirements. Currently, *memory* is the resource requirement supported.
 
+* **Queue Mapping based on User or Group** - This feature allows users to map 
a job to a specific queue based on the user or group.
+
 Configuration
 -
 
@@ -151,6 +153,30 @@ Configuration
 
 **Note:** An *ACL* is of the form *user1*,*user2* *space* *group1*,*group2* (a comma-separated list of users, then a space, then a comma-separated list of groups). The special value of * implies *anyone*. The special value of *space* implies *no one*. The default is * for the root queue if not specified.
 
+  * Queue Mapping based on User or Group
+
+  The `CapacityScheduler` supports the following parameters to configure the 
queue mapping based on user or group:
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.scheduler.capacity.queue-mappings` | This configuration specifies the mapping of user or group to a specific queue. You can map a single user or a list of users to queues. Syntax: `[u or g]:[name]:[queue_name][,next_mapping]*`. Here, *u or g* indicates whether the mapping is for a user or group. The value is *u* for user and *g* for group. *name* indicates the user name or group name. To specify the user who has submitted the application, *%user* can be used. *queue_name* indicates the queue name to which the application has to be mapped. To specify a queue name the same as the user name, *%user* can be used. To specify a queue name the same as the name of the primary group to which the user belongs, *%primary_group* can be used. |
+| `yarn.scheduler.capacity.queue-mappings-override.enable` | This function is used to specify whether the user-specified queues can be overridden. This is a Boolean value and the default value is *false*. |
+
+Example:
+
+```
+ property
+   nameyarn.scheduler.capacity.queue-mappings/name
+   
valueu:user1:queue1,g:group1:queue2,u:%user:%user,u:user2:%primary_group/value
+   description
+ Here, user1 is mapped to queue1, group1 is mapped to queue2, 
+ maps users to queues with the same name as user, user2 is mapped 
+ to queue name same as primary group respectively. The mappings will be 
+ evaluated from left to right, and the first valid mapping will be used.
+   /description
+ /property
+```
+
 ###Other Properties
 
   * Resource Calculator
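The queue-mapping properties documented in the table above normally live in capacity-scheduler.xml, but they can also be set programmatically on a Configuration; a minimal sketch with made-up user and queue names:

import org.apache.hadoop.conf.Configuration;

public class QueueMappingSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Map user "alice" to "queue1" and every other user to a queue named
    // after the user; mappings are evaluated left to right, first match wins.
    conf.set("yarn.scheduler.capacity.queue-mappings",
        "u:alice:queue1,u:%user:%user");
    // Allow these mappings to override a queue the client asked for
    // explicitly (the default is false).
    conf.setBoolean("yarn.scheduler.capacity.queue-mappings-override.enable",
        true);
    System.out.println(conf.get("yarn.scheduler.capacity.queue-mappings"));
  }
}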



hadoop git commit: Revert HADOOP-11638. OpensslSecureRandom.c pthreads_thread_id should support FreeBSD and Solaris in addition to Linux (Kiran Kumar M R via Colin P. McCabe)

2015-03-10 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/trunk 20b8ee135 -> 8d5b01e00


Revert HADOOP-11638. OpensslSecureRandom.c pthreads_thread_id should support 
FreeBSD and Solaris in addition to Linux (Kiran Kumar M R via Colin P.  McCabe)

This reverts commit 3241fc2b17f11e621d8ffb6160caa4b850c278b6.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8d5b01e0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8d5b01e0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8d5b01e0

Branch: refs/heads/trunk
Commit: 8d5b01e005b2647262861acf522e9b5c6d6f8bba
Parents: 20b8ee1
Author: cnauroth cnaur...@apache.org
Authored: Tue Mar 10 11:02:07 2015 -0700
Committer: cnauroth cnaur...@apache.org
Committed: Tue Mar 10 11:02:07 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  4 
 .../hadoop/crypto/random/OpensslSecureRandom.c  | 16 +---
 2 files changed, 1 insertion(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d5b01e0/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 7d0cbee..ab58270 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -680,10 +680,6 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11642. Upgrade azure sdk version from 0.6.0 to 2.0.0.
 (Shashank Khandelwal and Ivan Mitic via cnauroth)
 
-HADOOP-11638. OpensslSecureRandom.c pthreads_thread_id should support
-FreeBSD and Solaris in addition to Linux (Kiran Kumar M R via Colin P.
-McCabe)
-
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d5b01e0/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
index f30ccbe..6c31d10 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
+++ 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
@@ -29,10 +29,6 @@
 #include <sys/types.h>
 #endif
 
-#if defined(__FreeBSD__)
-#include <pthread_np.h>
-#endif
-
 #ifdef WINDOWS
 #include <windows.h>
 #endif
@@ -278,17 +274,7 @@ static void pthreads_locking_callback(int mode, int type, 
char *file, int line)
 
 static unsigned long pthreads_thread_id(void)
 {
-  unsigned long thread_id = 0;
-#if defined(__linux__)
-  thread_id = (unsigned long)syscall(SYS_gettid);
-#elif defined(__FreeBSD__)
-  thread_id = (unsigned long)pthread_getthreadid_np();
-#elif defined(__sun)
-  thread_id = (unsigned long)pthread_self();
-#else
-#error "Platform not supported"
-#endif
-  return thread_id;
+  return (unsigned long)syscall(SYS_gettid);
 }
 
 #endif /* UNIX */



hadoop git commit: HDFS-7898. Change TestAppendSnapshotTruncate to fail-fast. Contributed by Tsz Wo Nicholas Sze.

2015-03-10 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 9d1f67f2f -> 368ab2cd3


HDFS-7898. Change TestAppendSnapshotTruncate to fail-fast. Contributed by Tsz 
Wo Nicholas Sze.

(cherry picked from commit e43882e84ae44301eabd0122b5e5492da5fe9f66)
(cherry picked from commit c7105fcff0ac65c5f85d7cc8ca7c24b984217c2c)

Conflicts:
hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/368ab2cd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/368ab2cd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/368ab2cd

Branch: refs/heads/branch-2.7
Commit: 368ab2cd37408230cc4866f04e2d3b077bb2385c
Parents: 9d1f67f
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 9 10:52:17 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Tue Mar 10 10:31:41 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hadoop/hdfs/TestAppendSnapshotTruncate.java | 61 +---
 .../hdfs/server/namenode/TestFileTruncate.java  | 11 +++-
 3 files changed, 51 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/368ab2cd/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index fb3fefa..6fafff7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -417,6 +417,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7411. Change decommission logic to throttle by blocks rather than
 nodes in each interval. (Andrew Wang via cdouglas)
 
+HDFS-7898. Change TestAppendSnapshotTruncate to fail-fast.
+(Tsz Wo Nicholas Sze via jing9)
+
 HDFS-6806. HDFS Rolling upgrade document should mention the versions
 available. (J.Andreina via aajisaka)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/368ab2cd/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java
index 5c4c7b4..e80e14f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java
@@ -41,10 +41,6 @@ import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
-import org.apache.hadoop.hdfs.DFSUtil;
-import org.apache.hadoop.hdfs.DistributedFileSystem;
-import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.TestFileTruncate;
 import org.apache.hadoop.test.GenericTestUtils;
@@ -69,6 +65,9 @@ public class TestAppendSnapshotTruncate {
   private static final int BLOCK_SIZE = 1024;
   private static final int DATANODE_NUM = 3;
   private static final short REPLICATION = 3;
+  private static final int FILE_WORKER_NUM = 3;
+  private static final long TEST_TIME_SECOND = 10;
+  private static final long TEST_TIMEOUT_SECOND = TEST_TIME_SECOND + 60;
 
   static final int SHORT_HEARTBEAT = 1;
   static final String[] EMPTY_STRINGS = {};
@@ -106,7 +105,7 @@ public class TestAppendSnapshotTruncate {
 
 
   /** Test randomly mixing append, snapshot and truncate operations. */
-  @Test
+  @Test(timeout=TEST_TIMEOUT_SECOND*1000)
   public void testAST() throws Exception {
 final String dirPathString = "/dir";
 final Path dir = new Path(dirPathString);
@@ -121,12 +120,12 @@ public class TestAppendSnapshotTruncate {
 }
 localDir.mkdirs();
 
-final DirWorker w = new DirWorker(dir, localDir, 3);
+final DirWorker w = new DirWorker(dir, localDir, FILE_WORKER_NUM);
 w.startAllFiles();
 w.start();
-Worker.sleep(10L*1000);
+Worker.sleep(TEST_TIME_SECOND * 1000);
 w.stop();
-w.stoptAllFiles();
+w.stopAllFiles();
 w.checkEverything();
   }
 
@@ -259,7 +258,7 @@ public class TestAppendSnapshotTruncate {
   }
 }
 
-void stoptAllFiles() throws InterruptedException {
+void stopAllFiles() throws InterruptedException {
   for(FileWorker f : files) { 
 f.stop();
   }
@@ -269,12 +268,12 @@ public class TestAppendSnapshotTruncate {
   LOG.info("checkEverything");
   for(FileWorker f : files) { 
 f.checkFullFile();
-

hadoop git commit: HDFS-7830. DataNode does not release the volume lock when adding a volume fails. (Lei Xu via Colin P. McCabe)

2015-03-10 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/trunk a5cf985bf -> 5c1036d59


HDFS-7830. DataNode does not release the volume lock when adding a volume 
fails. (Lei Xu via Colin P. McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5c1036d5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5c1036d5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5c1036d5

Branch: refs/heads/trunk
Commit: 5c1036d598051cf6af595740f1ab82092b0b6554
Parents: a5cf985
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Mar 10 18:20:25 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Tue Mar 10 18:20:58 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../hadoop/hdfs/server/common/Storage.java  |  2 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 16 +++-
 .../datanode/TestDataNodeHotSwapVolumes.java| 34 ++--
 .../fsdataset/impl/FsDatasetTestUtil.java   | 43 
 .../fsdataset/impl/TestFsDatasetImpl.java   | 43 
 6 files changed, 108 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c1036d5/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index a2e552a..0ba34a4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1133,6 +1133,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7818. OffsetParam should return the default value instead of throwing
 NPE when the value is unspecified. (Eric Payne via wheat9)
 
+HDFS-7830. DataNode does not release the volume lock when adding a volume
+fails. (Lei Xu via Colin P. Mccabe)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c1036d5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
index e6bd5b2..e6f0999 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
@@ -672,7 +672,7 @@ public abstract class Storage extends StorageInfo {
  */
 public void lock() throws IOException {
   if (isShared()) {
-LOG.info("Locking is disabled");
+LOG.info("Locking is disabled for " + this.root);
 return;
   }
   FileLock newLock = tryLock();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c1036d5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 58f5615..0f28aa4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -46,6 +46,7 @@ import javax.management.NotCompliantMBeanException;
 import javax.management.ObjectName;
 import javax.management.StandardMBean;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -375,6 +376,12 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 LOG.info("Added volume - " + dir + ", StorageType: " + storageType);
   }
 
+  @VisibleForTesting
+  public FsVolumeImpl createFsVolume(String storageUuid, File currentDir,
+  StorageType storageType) throws IOException {
+return new FsVolumeImpl(this, storageUuid, currentDir, conf, storageType);
+  }
+
   @Override
   public void addVolume(final StorageLocation location,
   final List<NamespaceInfo> nsInfos)
@@ -394,8 +401,8 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 final Storage.StorageDirectory sd = builder.getStorageDirectory();
 
 StorageType storageType = 

hadoop git commit: DelegateToFileSystem erroneously uses default FS's port in constructor. (Brahma Reddy Battula via gera)

2015-03-10 Thread gera
Repository: hadoop
Updated Branches:
  refs/heads/trunk aa92b764a -> 64eb068ee


DelegateToFileSystem erroneously uses default FS's port in constructor. (Brahma 
Reddy Battula via gera)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/64eb068e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/64eb068e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/64eb068e

Branch: refs/heads/trunk
Commit: 64eb068ee8863da41df8db44bde1a9033198983d
Parents: aa92b76
Author: Gera Shegalov g...@apache.org
Authored: Tue Mar 10 13:52:06 2015 -0700
Committer: Gera Shegalov g...@apache.org
Committed: Tue Mar 10 13:52:06 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 .../apache/hadoop/fs/DelegateToFileSystem.java  |  3 +-
 .../src/main/resources/core-default.xml |  6 +++
 .../hadoop/fs/TestDelegateToFileSystem.java | 52 
 4 files changed, 62 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/64eb068e/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index ab58270..3e42aa8 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1082,6 +1082,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11692. Improve authentication failure WARN message to avoid user
 confusion. (Yongjun Zhang)
 
+HADOOP-11618. DelegateToFileSystem erroneously uses default FS's port in
+constructor. (Brahma Reddy Battula via gera)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/64eb068e/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
index 09707c6..6b7f387 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
@@ -29,7 +29,6 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Options.ChecksumOpt;
-import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.Progressable;
@@ -47,7 +46,7 @@ public abstract class DelegateToFileSystem extends 
AbstractFileSystem {
   Configuration conf, String supportedScheme, boolean authorityRequired)
   throws IOException, URISyntaxException {
 super(theUri, supportedScheme, authorityRequired, 
-FileSystem.getDefaultUri(conf).getPort());
+theFsImpl.getDefaultPort());
 fsImpl = theFsImpl;
 fsImpl.initialize(theUri, conf);
 fsImpl.statistics = getStatistics();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/64eb068e/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 1d531df..46eae0a 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -581,6 +581,12 @@ for ldap providers in the same way as above does.
 </property>
 
 <property>
+  <name>fs.AbstractFileSystem.ftp.impl</name>
+  <value>org.apache.hadoop.fs.ftp.FtpFs</value>
+  <description>The FileSystem for Ftp: uris.</description>
+</property>
+
+<property>
   <name>fs.ftp.host</name>
   <value>0.0.0.0</value>
   <description>FTP filesystem connects to this server</description>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/64eb068e/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegateToFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegateToFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegateToFileSystem.java
new file mode 100644
index 000..5de3286
--- /dev/null
+++ 
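With the fs.AbstractFileSystem.ftp.impl entry above, FileContext resolves ftp: URIs through org.apache.hadoop.fs.ftp.FtpFs, and DelegateToFileSystem now takes its default port from the wrapped FileSystem instead of from the default FS URI. A small sketch of exercising this; the host name is a placeholder and a reachable FTP server is assumed:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class FtpFileContextSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The ftp scheme is now wired to FtpFs via core-default.xml, so no
    // extra fs.AbstractFileSystem.* setting is needed here.
    FileContext fc = FileContext.getFileContext(
        URI.create("ftp://ftp.example.com/"), conf);
    // The stat call goes through DelegateToFileSystem, which now reports
    // FTP's own default port rather than the default FS's port.
    System.out.println(fc.getFileStatus(new Path("/")).isDirectory());
  }
}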

hadoop git commit: DelegateToFileSystem erroneously uses default FS's port in constructor. (Brahma Reddy Battula via gera)

2015-03-10 Thread gera
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 6802e8fef -> f1b32d143


DelegateToFileSystem erroneously uses default FS's port in constructor. (Brahma 
Reddy Battula via gera)

(cherry picked from commit 64eb068ee8863da41df8db44bde1a9033198983d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f1b32d14
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f1b32d14
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f1b32d14

Branch: refs/heads/branch-2
Commit: f1b32d1436ba62e8bcea39bb33056d431a37673b
Parents: 6802e8f
Author: Gera Shegalov g...@apache.org
Authored: Tue Mar 10 13:52:06 2015 -0700
Committer: Gera Shegalov g...@apache.org
Committed: Tue Mar 10 14:10:39 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 .../apache/hadoop/fs/DelegateToFileSystem.java  |  3 +-
 .../src/main/resources/core-default.xml |  6 +++
 .../hadoop/fs/TestDelegateToFileSystem.java | 52 
 4 files changed, 62 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f1b32d14/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index cfbe92f..16665f5 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -668,6 +668,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11692. Improve authentication failure WARN message to avoid user
 confusion. (Yongjun Zhang)
 
+HADOOP-11618. DelegateToFileSystem erroneously uses default FS's port in
+constructor. (Brahma Reddy Battula via gera)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f1b32d14/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
index 09707c6..6b7f387 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
@@ -29,7 +29,6 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Options.ChecksumOpt;
-import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.Progressable;
@@ -47,7 +46,7 @@ public abstract class DelegateToFileSystem extends 
AbstractFileSystem {
   Configuration conf, String supportedScheme, boolean authorityRequired)
   throws IOException, URISyntaxException {
 super(theUri, supportedScheme, authorityRequired, 
-FileSystem.getDefaultUri(conf).getPort());
+theFsImpl.getDefaultPort());
 fsImpl = theFsImpl;
 fsImpl.initialize(theUri, conf);
 fsImpl.statistics = getStatistics();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f1b32d14/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 76b3e2f..fedbc22 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -573,6 +573,12 @@ for ldap providers in the same way as above does.
 </property>
 
 <property>
+  <name>fs.AbstractFileSystem.ftp.impl</name>
+  <value>org.apache.hadoop.fs.ftp.FtpFs</value>
+  <description>The FileSystem for Ftp: uris.</description>
+</property>
+
+<property>
   <name>fs.ftp.host</name>
   <value>0.0.0.0</value>
   <description>FTP filesystem connects to this server</description>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f1b32d14/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegateToFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegateToFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegateToFileSystem.java
new file mode 100644
index 000..5de3286
--- /dev/null
+++ 

hadoop git commit: MAPREDUCE-4815. Speed up FileOutputCommitter#commitJob for many output files. (Siqi Li via gera)

2015-03-10 Thread gera
Repository: hadoop
Updated Branches:
  refs/heads/trunk a380643d2 -> aa92b764a


MAPREDUCE-4815. Speed up FileOutputCommitter#commitJob for many output files. 
(Siqi Li via gera)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aa92b764
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aa92b764
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aa92b764

Branch: refs/heads/trunk
Commit: aa92b764a7ddb888d097121c4d610089a0053d11
Parents: a380643
Author: Gera Shegalov g...@apache.org
Authored: Tue Mar 10 11:12:48 2015 -0700
Committer: Gera Shegalov g...@apache.org
Committed: Tue Mar 10 11:32:08 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt|   3 +
 .../lib/output/FileOutputCommitter.java | 119 ++--
 .../src/main/resources/mapred-default.xml   |  54 
 .../hadoop/mapred/TestFileOutputCommitter.java  | 134 +++
 .../lib/output/TestFileOutputCommitter.java | 116 ++--
 5 files changed, 349 insertions(+), 77 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa92b764/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index eecf022..0bbe85c 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -348,6 +348,9 @@ Release 2.7.0 - UNRELEASED
 
 MAPREDUCE-6059. Speed up history server startup time (Siqi Li via aw)
 
+MAPREDUCE-4815. Speed up FileOutputCommitter#commitJob for many output
+files. (Siqi Li via gera)
+
   BUG FIXES
 
 MAPREDUCE-6210. Use getApplicationAttemptId() instead of getApplicationId()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa92b764/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
index 55252f0..28a8548 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.mapreduce.lib.output;
 
+import java.io.FileNotFoundException;
 import java.io.IOException;
 
 import org.apache.commons.logging.Log;
@@ -25,6 +26,7 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -57,10 +59,14 @@ public class FileOutputCommitter extends OutputCommitter {
   @Deprecated
   protected static final String TEMP_DIR_NAME = PENDING_DIR_NAME;
   public static final String SUCCEEDED_FILE_NAME = "_SUCCESS";
-  public static final String SUCCESSFUL_JOB_OUTPUT_DIR_MARKER = 
-"mapreduce.fileoutputcommitter.marksuccessfuljobs";
+  public static final String SUCCESSFUL_JOB_OUTPUT_DIR_MARKER =
+  "mapreduce.fileoutputcommitter.marksuccessfuljobs";
+  public static final String FILEOUTPUTCOMMITTER_ALGORITHM_VERSION =
+  "mapreduce.fileoutputcommitter.algorithm.version";
+  public static final int FILEOUTPUTCOMMITTER_ALGORITHM_VERSION_DEFAULT = 1;
   private Path outputPath = null;
   private Path workPath = null;
+  private final int algorithmVersion;
 
   /**
* Create a file output committer
@@ -87,6 +93,14 @@ public class FileOutputCommitter extends OutputCommitter {
   @Private
   public FileOutputCommitter(Path outputPath, 
  JobContext context) throws IOException {
+Configuration conf = context.getConfiguration();
+algorithmVersion =
+conf.getInt(FILEOUTPUTCOMMITTER_ALGORITHM_VERSION,
+FILEOUTPUTCOMMITTER_ALGORITHM_VERSION_DEFAULT);
+LOG.info("File Output Committer Algorithm version is " + algorithmVersion);
+if (algorithmVersion != 1 && algorithmVersion != 2) {
+  throw new IOException("Only 1 or 2 algorithm version is supported");
+}
 if (outputPath != null) {
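Jobs opt in to the faster commit path through the new algorithm-version key; anything other than 1 or 2 is rejected when the committer is constructed. A minimal job-setup sketch (the job name is arbitrary):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;

public class CommitterV2Sketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The default stays at version 1; version 2 commits task output
    // directly into the final output directory, which speeds up commitJob
    // for jobs with many output files.
    conf.setInt(FileOutputCommitter.FILEOUTPUTCOMMITTER_ALGORITHM_VERSION, 2);
    Job job = Job.getInstance(conf, "committer-v2-sketch");
    // ... set input/output formats and paths, then job.waitForCompletion(true);
  }
}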
  

hadoop git commit: MAPREDUCE-4815. Speed up FileOutputCommitter#commitJob for many output files. (Siqi Li via gera)

2015-03-10 Thread gera
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 2690c7252 -> 6802e8fef


MAPREDUCE-4815. Speed up FileOutputCommitter#commitJob for many output files. 
(Siqi Li via gera)

(cherry picked from commit aa92b764a7ddb888d097121c4d610089a0053d11)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6802e8fe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6802e8fe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6802e8fe

Branch: refs/heads/branch-2
Commit: 6802e8fefc9af806f77e36426a145fd93ba9f009
Parents: 2690c72
Author: Gera Shegalov g...@apache.org
Authored: Tue Mar 10 11:12:48 2015 -0700
Committer: Gera Shegalov g...@apache.org
Committed: Tue Mar 10 11:40:45 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt|   3 +
 .../lib/output/FileOutputCommitter.java | 119 ++--
 .../src/main/resources/mapred-default.xml   |  54 
 .../hadoop/mapred/TestFileOutputCommitter.java  | 134 +++
 .../lib/output/TestFileOutputCommitter.java | 116 ++--
 5 files changed, 349 insertions(+), 77 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6802e8fe/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 5efcd32..6b57ddd 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -103,6 +103,9 @@ Release 2.7.0 - UNRELEASED
 
 MAPREDUCE-6059. Speed up history server startup time (Siqi Li via aw)
 
+MAPREDUCE-4815. Speed up FileOutputCommitter#commitJob for many output
+files. (Siqi Li via gera)
+
   BUG FIXES
 
 MAPREDUCE-6210. Use getApplicationAttemptId() instead of getApplicationId()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6802e8fe/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
index 55252f0..28a8548 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.mapreduce.lib.output;
 
+import java.io.FileNotFoundException;
 import java.io.IOException;
 
 import org.apache.commons.logging.Log;
@@ -25,6 +26,7 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -57,10 +59,14 @@ public class FileOutputCommitter extends OutputCommitter {
   @Deprecated
   protected static final String TEMP_DIR_NAME = PENDING_DIR_NAME;
   public static final String SUCCEEDED_FILE_NAME = "_SUCCESS";
-  public static final String SUCCESSFUL_JOB_OUTPUT_DIR_MARKER = 
-"mapreduce.fileoutputcommitter.marksuccessfuljobs";
+  public static final String SUCCESSFUL_JOB_OUTPUT_DIR_MARKER =
+  "mapreduce.fileoutputcommitter.marksuccessfuljobs";
+  public static final String FILEOUTPUTCOMMITTER_ALGORITHM_VERSION =
+  "mapreduce.fileoutputcommitter.algorithm.version";
+  public static final int FILEOUTPUTCOMMITTER_ALGORITHM_VERSION_DEFAULT = 1;
   private Path outputPath = null;
   private Path workPath = null;
+  private final int algorithmVersion;
 
   /**
* Create a file output committer
@@ -87,6 +93,14 @@ public class FileOutputCommitter extends OutputCommitter {
   @Private
   public FileOutputCommitter(Path outputPath, 
  JobContext context) throws IOException {
+Configuration conf = context.getConfiguration();
+algorithmVersion =
+conf.getInt(FILEOUTPUTCOMMITTER_ALGORITHM_VERSION,
+FILEOUTPUTCOMMITTER_ALGORITHM_VERSION_DEFAULT);
+LOG.info("File Output Committer Algorithm version is " + algorithmVersion);
+if (algorithmVersion != 1 && algorithmVersion != 2) {
+  throw new IOException(Only 1 

hadoop git commit: HADOOP-11568. Description on usage of classpath in hadoop command is incomplete. ( Contributed by Archana T )

2015-03-10 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 5efee5efd -> 2f902a823


HADOOP-11568. Description on usage of classpath in hadoop command is 
incomplete. ( Contributed by Archana T )


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2f902a82
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2f902a82
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2f902a82

Branch: refs/heads/branch-2
Commit: 2f902a823cc72b120e9a19983958f9f0873feeb4
Parents: 5efee5e
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue Mar 10 13:02:56 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Tue Mar 10 13:02:56 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 3 +++
 hadoop-common-project/hadoop-common/src/main/bin/hadoop | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f902a82/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index d5b3418..cfbe92f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -15,6 +15,9 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+HADOOP-11568. Description on usage of classpath in hadoop command is
+incomplete. ( Archana T via vinayakumarb )
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f902a82/hadoop-common-project/hadoop-common/src/main/bin/hadoop
--
diff --git a/hadoop-common-project/hadoop-common/src/main/bin/hadoop 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop
index b71aa8c..980cd0c 100755
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop
@@ -45,8 +45,8 @@ function print_usage(){
   echo "  distcp <srcurl> <desturl> copy file or directories recursively"
   echo "  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive"
   echo "  classpath            prints the class path needed to get the"
-  echo "  credential           interact with credential providers"
   echo "                       Hadoop jar and the required libraries"
+  echo "  credential           interact with credential providers"
   echo "  daemonlog            get/set the log level for each daemon"
   echo "  trace                view and modify Hadoop tracing settings"
   echo ""