[02/29] hadoop git commit: HDFS-7448 TestBookKeeperHACheckpoints fails in trunk -move CHANGES.TXT entry

2014-12-08 Thread vinayakumarb
HDFS-7448 TestBookKeeperHACheckpoints fails in trunk -move CHANGES.TXT entry


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/22afae89
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/22afae89
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/22afae89

Branch: refs/heads/HDFS-EC
Commit: 22afae890d7cf34a9be84590e7457774755b7a4a
Parents: e65b7c5
Author: Steve Loughran ste...@apache.org
Authored: Wed Dec 3 12:21:42 2014 +
Committer: Steve Loughran ste...@apache.org
Committed: Wed Dec 3 12:21:42 2014 +

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/22afae89/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 85d00b7..1679a71 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -257,9 +257,6 @@ Trunk (Unreleased)
 
 HDFS-7407. Minor typo in privileged pid/out/log names (aw)
 
-HDFS-7448 TestBookKeeperHACheckpoints fails in trunk build
-(Akira Ajisaka via stevel)
-
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -522,6 +519,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7444. convertToBlockUnderConstruction should preserve BlockCollection.
 (wheat9)
 
+HDFS-7448 TestBookKeeperHACheckpoints fails in trunk build
+(Akira Ajisaka via stevel)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES



[05/29] hadoop git commit: HDFS-7458. Add description to the nfs ports in core-site.xml used by nfs test to avoid confusion. Contributed by Yongjun Zhang

2014-12-08 Thread vinayakumarb
HDFS-7458. Add description to the nfs ports in core-site.xml used by nfs test 
to avoid confusion. Contributed by Yongjun Zhang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a1e82259
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a1e82259
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a1e82259

Branch: refs/heads/HDFS-EC
Commit: a1e822595c3dc5eadbd5430e57bc4691d09d5e68
Parents: 1812241
Author: Brandon Li brando...@apache.org
Authored: Wed Dec 3 13:31:26 2014 -0800
Committer: Brandon Li brando...@apache.org
Committed: Wed Dec 3 13:31:26 2014 -0800

--
 .../hadoop-hdfs-nfs/src/test/resources/core-site.xml  | 14 ++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   |  3 +++
 2 files changed, 17 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1e82259/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml
index f90ca03..f400bf2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml
@@ -20,10 +20,24 @@
 <property>
   <name>nfs.server.port</name>
   <value>2079</value>
+  <description>
+    Specify the port number used by Hadoop NFS.
+    Notice that the default value here is different than the default Hadoop nfs
+    port 2049 specified in hdfs-default.xml. 2049 is also the default port for
+    Linux nfs. The setting here allows starting Hadoop nfs for testing even if
+    an nfs server (Linux or Hadoop) is already running on the same host.
+  </description>
 </property>
 
 <property>
   <name>nfs.mountd.port</name>
   <value>4272</value>
+  <description>
+    Specify the port number used by the Hadoop mount daemon.
+    Notice that the default value here is different than 4242 specified in
+    hdfs-default.xml. This setting allows starting Hadoop nfs mountd for
+    testing even if the Linux or Hadoop mountd is already running on the
+    same host.
+  </description>
 </property>
 </configuration>
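The descriptions above explain why the test configuration picks 2079 and 4272 instead of the stock 2049/4242: the defaults may already be bound by a system nfs server or mountd on the build host. A minimal sketch (Python, not part of the patch) of how a harness could confirm a port is actually free before starting a test gateway:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is currently listening on the given TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on success, i.e. something accepted the connection
        return s.connect_ex((host, port)) != 0
```

A check like this avoids the confusing failure mode the patch documents: a test NFS gateway silently colliding with an already-running service.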

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1e82259/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 1679a71..a244dab 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -421,6 +421,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-6735. A minor optimization to avoid pread() be blocked by read()
 inside the same DFSInputStream (Lars Hofhansl via stack)
+
+HDFS-7458. Add description to the nfs ports in core-site.xml used by nfs
+test to avoid confusion (Yongjun Zhang via brandonli)
 
   OPTIMIZATIONS
 



[07/29] hadoop git commit: YARN-2891. Failed Container Executor does not provide a clear error message. Contributed by Dustin Cote. (harsh)

2014-12-08 Thread vinayakumarb
YARN-2891. Failed Container Executor does not provide a clear error message. 
Contributed by Dustin Cote. (harsh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a31e0164
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a31e0164
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a31e0164

Branch: refs/heads/HDFS-EC
Commit: a31e0164912236630c485e5aeb908b43e3a67c61
Parents: 799353e
Author: Harsh J ha...@cloudera.com
Authored: Thu Dec 4 03:16:08 2014 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Thu Dec 4 03:17:15 2014 +0530

--
 hadoop-yarn-project/CHANGES.txt   | 3 +++
 .../src/main/native/container-executor/impl/container-executor.c  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a31e0164/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index d44f46d..91151ad 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -59,6 +59,9 @@ Release 2.7.0 - UNRELEASED
 
   IMPROVEMENTS
 
+YARN-2891. Failed Container Executor does not provide a clear error
+message. (Dustin Cote via harsh)
+
 YARN-1979. TestDirectoryCollection fails when the umask is unusual.
 (Vinod Kumar Vavilapalli and Tsuyoshi OZAWA via junping_du)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a31e0164/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 9af9161..4fc78b6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -526,7 +526,7 @@ int check_dir(char* npath, mode_t st_mode, mode_t desired, int finalComponent) {
     int filePermInt = st_mode & (S_IRWXU | S_IRWXG | S_IRWXO);
     int desiredInt = desired & (S_IRWXU | S_IRWXG | S_IRWXO);
     if (filePermInt != desiredInt) {
-      fprintf(LOGFILE, "Path %s does not have desired permission.\n", npath);
+      fprintf(LOGFILE, "Path %s has permission %o but needs permission %o.\n",
+          npath, filePermInt, desiredInt);
       return -1;
     }
   }
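The fix masks both the actual and desired modes down to the rwx bits and reports both in octal, so the operator sees what the permission is versus what it should be. A hedged Python sketch of the same comparison (the real code is the C above; names here are illustrative):

```python
# Octal permission masks, matching S_IRWXU | S_IRWXG | S_IRWXO in <sys/stat.h>
S_IRWXU, S_IRWXG, S_IRWXO = 0o700, 0o070, 0o007
PERM_MASK = S_IRWXU | S_IRWXG | S_IRWXO

def check_dir_perms(st_mode, desired):
    """Compare only the rwx bits of a mode against the desired bits.
    Returns an error string in the new, more informative format, or None."""
    actual = st_mode & PERM_MASK   # strip file-type and suid/sticky bits
    wanted = desired & PERM_MASK
    if actual != wanted:
        return "Path has permission %o but needs permission %o." % (actual, wanted)
    return None
```

Note how masking first means a directory mode like 040755 compares as 755, exactly what the new message prints.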



[27/29] hadoop git commit: HADOOP-11313. Adding a document about NativeLibraryChecker. Contributed by Tsuyoshi OZAWA.

2014-12-08 Thread vinayakumarb
HADOOP-11313. Adding a document about NativeLibraryChecker. Contributed by 
Tsuyoshi OZAWA.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1b3bb9e7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1b3bb9e7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1b3bb9e7

Branch: refs/heads/HDFS-EC
Commit: 1b3bb9e7a33716c4d94786598b91a24a4b29fe67
Parents: 9297f98
Author: cnauroth cnaur...@apache.org
Authored: Sat Dec 6 20:12:31 2014 -0800
Committer: cnauroth cnaur...@apache.org
Committed: Sat Dec 6 20:12:31 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../src/site/apt/NativeLibraries.apt.vm   | 18 ++
 2 files changed, 21 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b3bb9e7/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 965c6d3..a626388 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -413,6 +413,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11356. Removed deprecated 
o.a.h.fs.permission.AccessControlException.
 (Li Lu via wheat9)
 
+HADOOP-11313. Adding a document about NativeLibraryChecker.
+(Tsuyoshi OZAWA via cnauroth)
+
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b3bb9e7/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm 
b/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
index 49818af..866b428 100644
--- a/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
+++ b/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
@@ -164,6 +164,24 @@ Native Libraries Guide
  * If something goes wrong, then:
INFO util.NativeCodeLoader - Unable to load native-hadoop library 
for your platform... using builtin-java classes where applicable
 
+* Check
+
+   NativeLibraryChecker is a tool to check whether native libraries are loaded 
correctly.
+   You can launch NativeLibraryChecker as follows:
+
+
+   $ hadoop checknative -a
+   14/12/06 01:30:45 WARN bzip2.Bzip2Factory: Failed to load/initialize 
native-bzip2 library system-native, will use pure-Java version
+   14/12/06 01:30:45 INFO zlib.ZlibFactory: Successfully loaded & initialized 
+   native-zlib library
+   Native library checking:
+   hadoop: true /home/ozawa/hadoop/lib/native/libhadoop.so.1.0.0
+   zlib:   true /lib/x86_64-linux-gnu/libz.so.1
+   snappy: true /usr/lib/libsnappy.so.1
+   lz4:true revision:99
+   bzip2:  false
+
+
+
 * Native Shared Libraries
 
You can load any native shared library using DistributedCache for
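The sample `hadoop checknative -a` output above reports one library per line as `name: true/false [path]`, interleaved with log noise. As a sketch of how a script could consume that output (a hypothetical helper, not part of the patch), keying off the true/false token filters the log lines out:

```python
def parse_checknative(output):
    """Map each native library name to whether it loaded, from
    'hadoop checknative -a' style output; ignores WARN/INFO log lines."""
    libs = {}
    for line in output.splitlines():
        name, sep, rest = line.partition(":")
        tokens = rest.split()
        # Library lines look like 'zlib:   true /lib/...'; log lines do not
        # have 'true'/'false' right after the first colon.
        if sep and tokens and tokens[0] in ("true", "false"):
            libs[name.strip()] = tokens[0] == "true"
    return libs
```

Against the sample output in the document, this would flag bzip2 as the one library that failed to load.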



[29/29] hadoop git commit: Merge remote-tracking branch 'origin/trunk' into HDFS-EC

2014-12-08 Thread vinayakumarb
Merge remote-tracking branch 'origin/trunk' into HDFS-EC


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/83318534
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/83318534
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/83318534

Branch: refs/heads/HDFS-EC
Commit: 8331853450bc52fb3d8eed61fde52639a5fdb619
Parents: bdc0101 120e1de
Author: Vinayakumar B vinayakuma...@intel.com
Authored: Mon Dec 8 14:42:52 2014 +0530
Committer: Vinayakumar B vinayakuma...@intel.com
Committed: Mon Dec 8 14:42:52 2014 +0530

--
 .../client/KerberosAuthenticator.java   |   6 +-
 hadoop-common-project/hadoop-common/CHANGES.txt |  21 +
 .../hadoop-common/src/CMakeLists.txt|   2 +-
 .../apache/hadoop/crypto/AesCtrCryptoCodec.java |  27 +-
 .../fs/permission/AccessControlException.java   |  66 ---
 .../hadoop/security/AccessControlException.java |   4 +-
 .../src/site/apt/NativeLibraries.apt.vm |  18 +
 .../apache/hadoop/crypto/TestCryptoCodec.java   |  64 +++
 .../hadoop/crypto/key/kms/server/KMSACLs.java   |  26 +-
 .../kms/server/KeyAuthorizationKeyProvider.java |   4 +
 .../hadoop/crypto/key/kms/server/TestKMS.java   |  13 +-
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml |   5 +
 .../hadoop/hdfs/nfs/conf/NfsConfigKeys.java |  10 +
 .../hadoop/hdfs/nfs/nfs3/Nfs3HttpServer.java| 111 
 .../hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java|  24 +-
 .../hdfs/nfs/nfs3/TestNfs3HttpServer.java   |  89 
 .../src/test/resources/core-site.xml|  14 +
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  31 +-
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   3 +
 .../datanode/ReplicaNotFoundException.java  |   2 +-
 .../server/namenode/AclEntryStatusFormat.java   | 136 +
 .../hadoop/hdfs/server/namenode/AclFeature.java |  24 +-
 .../hadoop/hdfs/server/namenode/AclStorage.java |  61 +--
 .../server/namenode/EncryptionZoneManager.java  |   4 +-
 .../hadoop/hdfs/server/namenode/FSDirAclOp.java | 244 +
 .../hdfs/server/namenode/FSDirConcatOp.java |   9 +-
 .../hdfs/server/namenode/FSDirMkdirOp.java  |  17 +-
 .../hdfs/server/namenode/FSDirRenameOp.java |  74 ++-
 .../hdfs/server/namenode/FSDirSnapshotOp.java   |  68 ++-
 .../server/namenode/FSDirStatAndListingOp.java  |  35 +-
 .../hdfs/server/namenode/FSDirectory.java   | 329 +++-
 .../hdfs/server/namenode/FSEditLogLoader.java   |  18 +-
 .../server/namenode/FSImageFormatPBINode.java   |  22 +-
 .../hdfs/server/namenode/FSNDNCacheOp.java  | 124 +
 .../hdfs/server/namenode/FSNamesystem.java  | 531 +++
 .../server/namenode/FSPermissionChecker.java|  61 +--
 .../hdfs/server/namenode/INodesInPath.java  |  21 +-
 .../hadoop/hdfs/server/namenode/NNConf.java | 104 
 .../snapshot/FSImageFormatPBSnapshot.java   |  13 +-
 .../namenode/snapshot/SnapshotManager.java  |  50 +-
 .../hdfs/server/namenode/FSAclBaseTest.java |   4 +-
 .../hdfs/server/namenode/TestAuditLogger.java   |  79 +--
 .../namenode/TestFSPermissionChecker.java   |  10 +-
 .../server/namenode/TestSnapshotPathINodes.java |  20 +-
 .../namenode/snapshot/TestSnapshotManager.java  |  14 +-
 hadoop-mapreduce-project/CHANGES.txt|   3 +
 .../apache/hadoop/mapred/MapReduceChildJVM.java |  34 +-
 .../v2/app/job/impl/TestMapReduceChildJVM.java  |  71 ++-
 .../apache/hadoop/mapreduce/v2/util/MRApps.java |  80 ++-
 .../apache/hadoop/mapred/FileOutputFormat.java  |   4 +-
 .../java/org/apache/hadoop/mapred/TaskLog.java  |   4 +
 .../apache/hadoop/mapreduce/MRJobConfig.java|  14 +
 .../src/main/resources/mapred-default.xml   |  28 +
 .../org/apache/hadoop/mapred/YARNRunner.java|   9 +-
 hadoop-yarn-project/CHANGES.txt |  28 +
 hadoop-yarn-project/hadoop-yarn/bin/yarn|   5 +
 .../hadoop-yarn/hadoop-yarn-api/pom.xml |   1 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  16 +-
 .../yarn/server/api/SCMAdminProtocol.java   |  53 ++
 .../yarn/server/api/SCMAdminProtocolPB.java |  31 ++
 .../RunSharedCacheCleanerTaskRequest.java   |  37 ++
 .../RunSharedCacheCleanerTaskResponse.java  |  58 ++
 .../main/proto/server/SCM_Admin_protocol.proto  |  29 +
 .../src/main/proto/yarn_service_protos.proto|  11 +
 .../org/apache/hadoop/yarn/client/SCMAdmin.java | 183 +++
 .../hadoop/yarn/client/cli/ApplicationCLI.java  |   8 +-
 .../hadoop/yarn/client/cli/TestYarnCLI.java |  41 +-
 .../hadoop/yarn/ContainerLogAppender.java   |  11 +-
 .../yarn/ContainerRollingLogAppender.java   |  11 +-
 .../pb/client/SCMAdminProtocolPBClientImpl.java |  73 +++
 .../service/SCMAdminProtocolPBServiceImpl.java  |  57 ++
 .../RunSharedCacheCleanerTaskRequestPBImpl.java |  53 ++
 ...RunSharedCacheCleanerTaskResponsePBImpl.java |  66 +++
 .../src/main/resources/yarn-default.xml

[08/29] hadoop git commit: YARN-2880. Added a test to make sure node labels will be recovered if RM restart is enabled. Contributed by Rohith Sharmaks

2014-12-08 Thread vinayakumarb
YARN-2880. Added a test to make sure node labels will be recovered if RM 
restart is enabled. Contributed by Rohith Sharmaks


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/73fbb3c6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/73fbb3c6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/73fbb3c6

Branch: refs/heads/HDFS-EC
Commit: 73fbb3c66b0d90abee49c766ee9d2f05517cb9de
Parents: a31e016
Author: Jian He jia...@apache.org
Authored: Wed Dec 3 17:14:52 2014 -0800
Committer: Jian He jia...@apache.org
Committed: Wed Dec 3 17:14:52 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../server/resourcemanager/TestRMRestart.java   | 91 
 2 files changed, 94 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/73fbb3c6/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 91151ad..30b9260 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -57,6 +57,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2765. Added leveldb-based implementation for RMStateStore. (Jason Lowe
 via jianhe)
 
+YARN-2880. Added a test to make sure node labels will be recovered
+if RM restart is enabled. (Rohith Sharmaks via jianhe)
+
   IMPROVEMENTS
 
 YARN-2891. Failed Container Executor does not provide a clear error

http://git-wip-us.apache.org/repos/asf/hadoop/blob/73fbb3c6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
index a42170b..29f0208 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
@@ -69,6 +69,7 @@ import org.apache.hadoop.yarn.api.records.Container;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.ContainerState;
 import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
+import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.api.records.ResourceRequest;
@@ -82,6 +83,7 @@ import 
org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NMContainerStatus;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
 import org.apache.hadoop.yarn.server.api.records.NodeAction;
+import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import 
org.apache.hadoop.yarn.server.resourcemanager.recovery.MemoryRMStateStore;
 import org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore;
 import 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.RMState;
@@ -105,6 +107,9 @@ import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Sets;
+
 public class TestRMRestart extends ParameterizedSchedulerTestBase {
  private final static File TEMP_DIR = new File(System.getProperty(
      "test.build.data", "/tmp"), "decommision");
@@ -2036,4 +2041,90 @@ public class TestRMRestart extends 
ParameterizedSchedulerTestBase {
 }
   }
 
+  // This test does the following verification:
+  // 1. Start RM1 with store path /tmp
+  // 2. Add/remove/replace labels to cluster and node label and verify
+  // 3. Start RM2 with store path /tmp only
+  // 4. Get cluster and node label, it should be present by recovering it
+  @Test(timeout = 2)
+  public void testRMRestartRecoveringNodeLabelManager() throws Exception {
+MemoryRMStateStore memStore = new MemoryRMStateStore();
+memStore.init(conf);
+MockRM rm1 = new MockRM(conf, memStore) {
+  @Override
+  protected RMNodeLabelsManager createNodeLabelManager() {
+RMNodeLabelsManager mgr = new RMNodeLabelsManager();
+mgr.init(getConfig());
+return mgr;
+  }
+};
+ 
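The recovery flow the test exercises — persist label changes through RM1, then start RM2 against the same store and expect the labels back — can be sketched with toy stand-ins (names here are illustrative, not the YARN API):

```python
class MemoryStateStore:
    """Toy stand-in for an RM state store: label state outlives any one RM."""
    def __init__(self):
        self.cluster_labels = set()

class ResourceManager:
    """Toy RM that recovers node-label state from the store on startup."""
    def __init__(self, store):
        self.store = store
        self.labels = set(store.cluster_labels)  # recovery on (re)start

    def add_labels(self, *labels):
        self.labels.update(labels)
        self.store.cluster_labels.update(labels)  # persist alongside in-memory state

    def remove_label(self, label):
        self.labels.discard(label)
        self.store.cluster_labels.discard(label)
```

A second ResourceManager constructed over the same store models the RM restart: its labels must equal whatever survived the add/remove operations on the first one.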

[13/29] hadoop git commit: HDFS-7468. Moving verify* functions to corresponding classes. Contributed by Li Lu.

2014-12-08 Thread vinayakumarb
HDFS-7468. Moving verify* functions to corresponding classes. Contributed by Li 
Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/26d8dec7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/26d8dec7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/26d8dec7

Branch: refs/heads/HDFS-EC
Commit: 26d8dec756da1d9bd3df3b41a4dd5d8ff03bc5b2
Parents: 258623f
Author: Haohui Mai whe...@apache.org
Authored: Thu Dec 4 14:09:45 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Thu Dec 4 14:09:45 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hdfs/server/namenode/FSDirRenameOp.java | 54 +--
 .../hdfs/server/namenode/FSDirSnapshotOp.java   | 20 +-
 .../hdfs/server/namenode/FSDirectory.java   | 72 ++--
 4 files changed, 78 insertions(+), 71 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/26d8dec7/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 2775285..4432024 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -427,6 +427,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7458. Add description to the nfs ports in core-site.xml used by nfs
 test to avoid confusion (Yongjun Zhang via brandonli)
 
+HDFS-7468. Moving verify* functions to corresponding classes.
+(Li Lu via wheat9)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/26d8dec7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
index f371f05..08241c4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.protocol.FSLimitException;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
 import org.apache.hadoop.hdfs.protocol.SnapshotException;
@@ -73,6 +74,51 @@ class FSDirRenameOp {
   }
 
   /**
+   * Verify quota for rename operation where srcInodes[srcInodes.length-1] 
moves
+   * dstInodes[dstInodes.length-1]
+   */
+  static void verifyQuotaForRename(FSDirectory fsd,
+  INode[] src, INode[] dst)
+  throws QuotaExceededException {
+if (!fsd.getFSNamesystem().isImageLoaded() || fsd.shouldSkipQuotaChecks()) 
{
+  // Do not check quota if edits log is still being processed
+  return;
+}
+int i = 0;
+while(src[i] == dst[i]) { i++; }
+// src[i - 1] is the last common ancestor.
+
+final Quota.Counts delta = src[src.length - 1].computeQuotaUsage();
+
+// Reduce the required quota by dst that is being removed
+final int dstIndex = dst.length - 1;
+if (dst[dstIndex] != null) {
+  delta.subtract(dst[dstIndex].computeQuotaUsage());
+}
+FSDirectory.verifyQuota(dst, dstIndex, delta.get(Quota.NAMESPACE),
+delta.get(Quota.DISKSPACE), src[i - 1]);
+  }
+
+  /**
+   * Checks file system limits (max component length and max directory items)
+   * during a rename operation.
+   */
+  static void verifyFsLimitsForRename(FSDirectory fsd,
+  INodesInPath srcIIP,
+  INodesInPath dstIIP)
+  throws FSLimitException.PathComponentTooLongException,
+  FSLimitException.MaxDirectoryItemsExceededException {
+byte[] dstChildName = dstIIP.getLastLocalName();
+INode[] dstInodes = dstIIP.getINodes();
+int pos = dstInodes.length - 1;
+fsd.verifyMaxComponentLength(dstChildName, dstInodes, pos);
+// Do not enforce max directory items if renaming within same directory.
+if (srcIIP.getINode(-2) != dstIIP.getINode(-2)) {
+  fsd.verifyMaxDirItems(dstInodes, pos);
+}
+  }
+
+  /**
* Change a path name
*
* @param fsd FSDirectory
@@ -129,8 +175,8 @@ class FSDirRenameOp {
 
 fsd.ezManager.checkMoveValidity(srcIIP, dstIIP, src);
 // Ensure dst has quota to accommodate rename
-fsd.verifyFsLimitsForRename(srcIIP, dstIIP);
-fsd.verifyQuotaForRename(srcIIP.getINodes(), 
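The quota check above walks the src and dst inode arrays in parallel to find the last common ancestor before charging the quota delta below it. A hedged Python sketch of that scan over path components (the real code operates on INode arrays and assumes the paths diverge):

```python
def last_common_ancestor(src, dst):
    """Index of the last shared path component; mirrors the
    while(src[i] == dst[i]) i++; scan in verifyQuotaForRename,
    with an explicit bounds check added for safety."""
    i = 0
    while i < min(len(src), len(dst)) and src[i] == dst[i]:
        i += 1
    return i - 1
```

For a rename of /a/b to /a/c the scan stops at index 1 (component "a"), so quota is verified only against the subtree under the shared ancestor rather than from the root.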

[18/29] hadoop git commit: HDFS-7472. Fix typo in message of ReplicaNotFoundException. Contributed by Masatake Iwasaki.

2014-12-08 Thread vinayakumarb
HDFS-7472. Fix typo in message of ReplicaNotFoundException. Contributed by 
Masatake Iwasaki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f6452eb2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f6452eb2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f6452eb2

Branch: refs/heads/HDFS-EC
Commit: f6452eb2592a9350bc3f6ce1e354ea55b275ff83
Parents: 6a5596e
Author: Haohui Mai whe...@apache.org
Authored: Fri Dec 5 11:23:13 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Fri Dec 5 11:23:13 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../hadoop/hdfs/server/datanode/ReplicaNotFoundException.java | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f6452eb2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index c6cb185..22f462f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -536,6 +536,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7448 TestBookKeeperHACheckpoints fails in trunk build
 (Akira Ajisaka via stevel)
 
+HDFS-7472. Fix typo in message of ReplicaNotFoundException.
+(Masatake Iwasaki via wheat9)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f6452eb2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
index 124574b..b159d3a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
@@ -37,7 +37,7 @@ public class ReplicaNotFoundException extends IOException {
   public final static String NON_EXISTENT_REPLICA =
     "Cannot append to a non-existent replica ";
   public final static String UNEXPECTED_GS_REPLICA =
-    "Cannot append to a replica with unexpeted generation stamp ";
+    "Cannot append to a replica with unexpected generation stamp ";
 
   public ReplicaNotFoundException() {
 super();



[10/29] hadoop git commit: HDFS-7424. Add web UI for NFS gateway. Contributed by Brandon Li

2014-12-08 Thread vinayakumarb
HDFS-7424. Add web UI for NFS gateway. Contributed by Brandon Li


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1bbcc3d0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1bbcc3d0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1bbcc3d0

Branch: refs/heads/HDFS-EC
Commit: 1bbcc3d0320b9435317bfeaa078af22d4de8d00c
Parents: 9d1a8f5
Author: Brandon Li brando...@apache.org
Authored: Thu Dec 4 10:46:26 2014 -0800
Committer: Brandon Li brando...@apache.org
Committed: Thu Dec 4 10:46:26 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml |   5 +
 .../hadoop/hdfs/nfs/conf/NfsConfigKeys.java |  10 ++
 .../hadoop/hdfs/nfs/nfs3/Nfs3HttpServer.java| 111 +++
 .../hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java|  24 +++-
 .../hdfs/nfs/nfs3/TestNfs3HttpServer.java   |  89 +++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   2 +
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   3 +
 7 files changed, 242 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1bbcc3d0/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
index 42962a6..9a9d29c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
@@ -179,6 +179,11 @@
       <artifactId>xmlenc</artifactId>
       <scope>compile</scope>
     </dependency>
+    <dependency>
+      <groupId>org.bouncycastle</groupId>
+      <artifactId>bcprov-jdk16</artifactId>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 
   <profiles>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1bbcc3d0/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
index 178d855..7566791 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
@@ -60,4 +60,14 @@ public class NfsConfigKeys {
   
   public final static String LARGE_FILE_UPLOAD = "nfs.large.file.upload";
   public final static boolean LARGE_FILE_UPLOAD_DEFAULT = true;
+  
+  public static final String NFS_HTTP_PORT_KEY = "nfs.http.port";
+  public static final int NFS_HTTP_PORT_DEFAULT = 50079;
+  public static final String NFS_HTTP_ADDRESS_KEY = "nfs.http.address";
+  public static final String NFS_HTTP_ADDRESS_DEFAULT = "0.0.0.0:" + NFS_HTTP_PORT_DEFAULT;
+
+  public static final String NFS_HTTPS_PORT_KEY = "nfs.https.port";
+  public static final int NFS_HTTPS_PORT_DEFAULT = 50579;
+  public static final String NFS_HTTPS_ADDRESS_KEY = "nfs.https.address";
+  public static final String NFS_HTTPS_ADDRESS_DEFAULT = "0.0.0.0:" + NFS_HTTPS_PORT_DEFAULT;
 }
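The new keys follow the usual Hadoop pattern: an address key whose default is built from a port default, so either can be overridden. A conceptual sketch of resolving such an address from a plain dict (the real code uses Hadoop's Configuration class; this helper is hypothetical):

```python
def http_address(conf, key="nfs.http.address", default_port=50079):
    """Resolve (host, port) for the gateway HTTP server.
    Falls back to 0.0.0.0:<default_port> when the key is unset."""
    addr = conf.get(key, "0.0.0.0:%d" % default_port)
    host, _, port = addr.rpartition(":")  # rpartition tolerates host names with no colon issues
    return host, int(port)
```

Overriding only `nfs.http.address` is then enough to move the web UI to another interface or port.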

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1bbcc3d0/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3HttpServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3HttpServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3HttpServer.java
new file mode 100644
index 000..c37a21e
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3HttpServer.java
@@ -0,0 +1,111 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.nfs.nfs3;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import 

[17/29] hadoop git commit: HDFS-7478. Move org.apache.hadoop.hdfs.server.namenode.NNConf to FSNamesystem. Contributed by Li Lu.

2014-12-08 Thread vinayakumarb
HDFS-7478. Move org.apache.hadoop.hdfs.server.namenode.NNConf to FSNamesystem. 
Contributed by Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6a5596e3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6a5596e3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6a5596e3

Branch: refs/heads/HDFS-EC
Commit: 6a5596e3b4443462fc86f800b3c2eb839d44c3bd
Parents: 2829b7a
Author: Haohui Mai whe...@apache.org
Authored: Fri Dec 5 10:55:13 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Fri Dec 5 10:55:13 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hdfs/server/namenode/FSNamesystem.java  |  66 +---
 .../hadoop/hdfs/server/namenode/NNConf.java | 104 ---
 3 files changed, 54 insertions(+), 119 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a5596e3/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 02f41cc..c6cb185 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -430,6 +430,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7468. Moving verify* functions to corresponding classes.
 (Li Lu via wheat9)
 
+HDFS-7478. Move org.apache.hadoop.hdfs.server.namenode.NNConf to
+FSNamesystem. (Li Lu via wheat9)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a5596e3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index a6e88c6..22039fc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -532,7 +532,9 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 
   private final RetryCache retryCache;
 
-  private final NNConf nnConf;
+  private final boolean aclsEnabled;
+  private final boolean xattrsEnabled;
+  private final int xattrMaxSize;
 
   private KeyProviderCryptoExtension provider = null;
 
@@ -848,7 +850,23 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   this.isDefaultAuditLogger = auditLoggers.size() == 1 &&
       auditLoggers.get(0) instanceof DefaultAuditLogger;
   this.retryCache = ignoreRetryCache ? null : initRetryCache(conf);
-  this.nnConf = new NNConf(conf);
+
+  this.aclsEnabled = conf.getBoolean(
+  DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY,
+  DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_DEFAULT);
+  LOG.info("ACLs enabled? " + aclsEnabled);
+  this.xattrsEnabled = conf.getBoolean(
+  DFSConfigKeys.DFS_NAMENODE_XATTRS_ENABLED_KEY,
+  DFSConfigKeys.DFS_NAMENODE_XATTRS_ENABLED_DEFAULT);
+  LOG.info("XAttrs enabled? " + xattrsEnabled);
+  this.xattrMaxSize = conf.getInt(
+  DFSConfigKeys.DFS_NAMENODE_MAX_XATTR_SIZE_KEY,
+  DFSConfigKeys.DFS_NAMENODE_MAX_XATTR_SIZE_DEFAULT);
+  Preconditions.checkArgument(xattrMaxSize >= 0,
+      "Cannot set a negative value for the maximum size of an xattr (%s).",
+      DFSConfigKeys.DFS_NAMENODE_MAX_XATTR_SIZE_KEY);
+  final String unlimited = xattrMaxSize == 0 ? " (unlimited)" : "";
+  LOG.info("Maximum size of an xattr: " + xattrMaxSize + unlimited);
 } catch(IOException e) {
   LOG.error(getClass().getSimpleName() + " initialization failed.", e);
   close();
@@ -7827,7 +7845,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   void modifyAclEntries(final String srcArg, List<AclEntry> aclSpec)
   throws IOException {
 String src = srcArg;
-nnConf.checkAclsConfigFlag();
+checkAclsConfigFlag();
 HdfsFileStatus resultingStat = null;
 FSPermissionChecker pc = getPermissionChecker();
 checkOperation(OperationCategory.WRITE);
@@ -7854,7 +7872,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   void removeAclEntries(final String srcArg, List<AclEntry> aclSpec)
   throws IOException {
 String src = srcArg;
-nnConf.checkAclsConfigFlag();
+checkAclsConfigFlag();
 HdfsFileStatus resultingStat = null;
 FSPermissionChecker pc = getPermissionChecker();
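The validation that HDFS-7478 inlines from `NNConf` into the FSNamesystem constructor — reject a negative xattr size limit, and treat zero as unlimited — can be restated as a standalone helper. This is a hypothetical sketch: a plain `IllegalArgumentException` stands in for Guava's `Preconditions.checkArgument`, and the method name is invented for illustration:

```java
/** Standalone sketch of the xattr size-limit validation inlined into
 *  FSNamesystem. Names and the exception type are illustrative only. */
public class XattrConfigSketch {

    /** Validate the configured limit and describe it as the log line does. */
    public static String describeXattrLimit(int xattrMaxSize) {
        if (xattrMaxSize < 0) {
            throw new IllegalArgumentException(
                "Cannot set a negative value for the maximum size of an xattr");
        }
        // Zero is the sentinel for "no limit", mirroring the patched code.
        String unlimited = xattrMaxSize == 0 ? " (unlimited)" : "";
        return "Maximum size of an xattr: " + xattrMaxSize + unlimited;
    }

    public static void main(String[] args) {
        System.out.println(describeXattrLimit(0));
        // Maximum size of an xattr: 0 (unlimited)
        System.out.println(describeXattrLimit(16384));
        // Maximum size of an xattr: 16384
    }
}
```

Folding these three flags into FSNamesystem removes an indirection-only class (NNConf) while keeping the fail-fast check in the constructor, so a bad config still aborts NameNode startup.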
 

[23/29] hadoop git commit: YARN-2869. CapacityScheduler should trim sub queue names when parse configuration. Contributed by Wangda Tan

2014-12-08 Thread vinayakumarb
YARN-2869. CapacityScheduler should trim sub queue names when parse 
configuration. Contributed by Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e69af836
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e69af836
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e69af836

Branch: refs/heads/HDFS-EC
Commit: e69af836f34f16fba565ab112c9bf0d367675b16
Parents: 475c6b4
Author: Jian He jia...@apache.org
Authored: Fri Dec 5 17:33:39 2014 -0800
Committer: Jian He jia...@apache.org
Committed: Fri Dec 5 17:33:39 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../CapacitySchedulerConfiguration.java |  10 +-
 .../scheduler/capacity/TestQueueParsing.java| 110 +++
 3 files changed, 122 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e69af836/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0b88959..0d7a843 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -194,6 +194,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in
 YarnConfiguration. (rchiang via rkanter)
 
+YARN-2869. CapacityScheduler should trim sub queue names when parse
+configuration. (Wangda Tan via jianhe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e69af836/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
index 0a49224..5bbb436 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
@@ -260,7 +260,7 @@ public class CapacitySchedulerConfiguration extends 
ReservationSchedulerConfigur
 }
   }
 
-  private String getQueuePrefix(String queue) {
+  static String getQueuePrefix(String queue) {
 String queueName = PREFIX + queue + DOT;
 return queueName;
   }
@@ -538,6 +538,14 @@ public class CapacitySchedulerConfiguration extends 
ReservationSchedulerConfigur
   public String[] getQueues(String queue) {
    LOG.debug("CSConf - getQueues called for: queuePrefix=" +
        getQueuePrefix(queue));
 String[] queues = getStrings(getQueuePrefix(queue) + QUEUES);
+    List<String> trimmedQueueNames = new ArrayList<String>();
+if (null != queues) {
+  for (String s : queues) {
+trimmedQueueNames.add(s.trim());
+  }
+  queues = trimmedQueueNames.toArray(new String[0]);
+}
+ 
    LOG.debug("CSConf - getQueues: queuePrefix=" + getQueuePrefix(queue) +
        ", queues=" + ((queues == null) ? "" : StringUtils.arrayToString(queues)));
 return queues;
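The trimming YARN-2869 adds handles queue lists configured with whitespace, e.g. `a, b` instead of `a,b`. As a standalone sketch (hypothetical helper; the real logic lives inside `CapacitySchedulerConfiguration.getQueues`):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the sub-queue name trimming added by YARN-2869. */
public class QueueTrimSketch {

    /** Trim each configured queue name; preserve a null list as-is. */
    public static String[] trimQueueNames(String[] queues) {
        if (queues == null) {
            return null;
        }
        List<String> trimmed = new ArrayList<String>();
        for (String s : queues) {
            trimmed.add(s.trim());
        }
        return trimmed.toArray(new String[0]);
    }

    public static void main(String[] args) {
        // A comma-separated config value with stray spaces around names.
        String[] out = trimQueueNames(" a, b ".split(","));
        System.out.println(out[0] + "|" + out[1]);  // a|b
    }
}
```

Without the trim, ` b` and `b` are distinct map keys, so a queue configured with a leading space could never be matched at scheduling time.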

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e69af836/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
index cf2e5ce..5a9fbe1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
+++ 

[06/29] hadoop git commit: YARN-2874. Dead lock in DelegationTokenRenewer which blocks RM to execute any further apps. (Naganarasimha G R via kasha)

2014-12-08 Thread vinayakumarb
YARN-2874. Dead lock in DelegationTokenRenewer which blocks RM to execute any 
further apps. (Naganarasimha G R via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/799353e2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/799353e2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/799353e2

Branch: refs/heads/HDFS-EC
Commit: 799353e2c7db5af6e40e3521439b5c8a3c5c6a51
Parents: a1e8225
Author: Karthik Kambatla ka...@apache.org
Authored: Wed Dec 3 13:44:41 2014 -0800
Committer: Karthik Kambatla ka...@apache.org
Committed: Wed Dec 3 13:44:41 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../security/DelegationTokenRenewer.java| 12 ++--
 2 files changed, 9 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/799353e2/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 421e5ea..d44f46d 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -175,6 +175,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2894. Fixed a bug regarding application view acl when RM fails over.
 (Rohith Sharmaks via jianhe)
 
+YARN-2874. Dead lock in DelegationTokenRenewer which blocks RM to execute
+any further apps. (Naganarasimha G R via kasha)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/799353e2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
index 2dc331e..cca6e8d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
@@ -20,7 +20,6 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.security;
 
 import java.io.IOException;
 import java.nio.ByteBuffer;
-import java.security.PrivilegedAction;
 import java.security.PrivilegedExceptionAction;
 import java.util.ArrayList;
 import java.util.Collection;
@@ -39,6 +38,7 @@ import java.util.concurrent.LinkedBlockingQueue;
 import java.util.concurrent.ThreadFactory;
 import java.util.concurrent.ThreadPoolExecutor;
 import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
@@ -445,15 +445,15 @@ public class DelegationTokenRenewer extends 
AbstractService {
*/
   private class RenewalTimerTask extends TimerTask {
 private DelegationTokenToRenew dttr;
-private boolean cancelled = false;
+private AtomicBoolean cancelled = new AtomicBoolean(false);
 
 RenewalTimerTask(DelegationTokenToRenew t) {  
   dttr = t;  
 }
 
 @Override
-public synchronized void run() {
-  if (cancelled) {
+public void run() {
+  if (cancelled.get()) {
 return;
   }
 
@@ -475,8 +475,8 @@ public class DelegationTokenRenewer extends AbstractService 
{
 }
 
 @Override
-public synchronized boolean cancel() {
-  cancelled = true;
+public boolean cancel() {
+  cancelled.set(true);
   return super.cancel();
 }
   }
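The YARN-2874 fix is a lock-free cancellation pattern: `run()` and `cancel()` no longer synchronize on the task monitor (which could deadlock against other RM locks); instead an `AtomicBoolean` carries the flag. A minimal self-contained sketch of the same pattern (the renewal work itself is elided, and `isCancelled` is an invented accessor):

```java
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicBoolean;

/** Sketch of the AtomicBoolean-based cancellation from YARN-2874: neither
 *  run() nor cancel() holds the task monitor, so cancel() never blocks
 *  behind a renewal in progress. */
public class CancellableTask extends TimerTask {
    private final AtomicBoolean cancelled = new AtomicBoolean(false);

    @Override
    public void run() {
        if (cancelled.get()) {
            return; // skip the (potentially slow) renewal work
        }
        // ... token renewal would happen here ...
    }

    @Override
    public boolean cancel() {
        cancelled.set(true);       // visible to run() without locking
        return super.cancel();     // also remove from the Timer's queue
    }

    /** Illustrative accessor, not part of the original patch. */
    public boolean isCancelled() {
        return cancelled.get();
    }

    public static void main(String[] args) {
        CancellableTask t = new CancellableTask();
        t.cancel();
        t.run();  // returns immediately; no renewal attempted
        System.out.println("cancelled=" + t.isCancelled());  // cancelled=true
    }
}
```

The original `synchronized` variant was correct in isolation; the deadlock arose only when a thread holding another RM lock called `cancel()` while `run()` held the task monitor and waited on that same RM lock, which the atomic flag sidesteps entirely.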



[15/29] hadoop git commit: HDFS-7454. Reduce memory footprint for AclEntries in NameNode. Contributed by Vinayakumar B.

2014-12-08 Thread vinayakumarb
HDFS-7454. Reduce memory footprint for AclEntries in NameNode. Contributed by 
Vinayakumar B.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0653918d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0653918d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0653918d

Branch: refs/heads/HDFS-EC
Commit: 0653918dad855b394e8e3b8b3f512f474d872ee9
Parents: 7896815
Author: Haohui Mai whe...@apache.org
Authored: Thu Dec 4 20:49:45 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Thu Dec 4 20:49:45 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../server/namenode/AclEntryStatusFormat.java   | 136 +++
 .../hadoop/hdfs/server/namenode/AclFeature.java |  24 +++-
 .../hadoop/hdfs/server/namenode/AclStorage.java |  30 +++-
 .../server/namenode/FSImageFormatPBINode.java   |  22 +--
 .../server/namenode/FSPermissionChecker.java|  39 ++
 .../snapshot/FSImageFormatPBSnapshot.java   |  13 +-
 .../hdfs/server/namenode/FSAclBaseTest.java |   4 +-
 8 files changed, 223 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0653918d/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 4432024..02f41cc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -432,6 +432,9 @@ Release 2.7.0 - UNRELEASED
 
   OPTIMIZATIONS
 
+HDFS-7454. Reduce memory footprint for AclEntries in NameNode.
+(Vinayakumar B via wheat9)
+
   BUG FIXES
 
 HDFS-6741. Improve permission denied message when

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0653918d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
new file mode 100644
index 000..82aa214
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
@@ -0,0 +1,136 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.util.List;
+
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclEntryScope;
+import org.apache.hadoop.fs.permission.AclEntryType;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.hdfs.util.LongBitFormat;
+
+import com.google.common.collect.ImmutableList;
+
+/**
+ * Class to pack an AclEntry into an integer. <br>
+ * An ACL entry is represented by a 32-bit integer in Big Endian format. <br>
+ * The bits can be divided in five segments: <br>
+ * [0:1) || [1:3) || [3:6) || [6:7) || [7:32) <br>
+ * <br>
+ * [0:1) -- the scope of the entry (AclEntryScope) <br>
+ * [1:3) -- the type of the entry (AclEntryType) <br>
+ * [3:6) -- the permission of the entry (FsAction) <br>
+ * [6:7) -- A flag to indicate whether Named entry or not <br>
+ * [7:32) -- the name of the entry, which is an ID that points to a <br>
+ * string in the StringTableSection. <br>
+ */
+public enum AclEntryStatusFormat {
+
+  SCOPE(null, 1),
+  TYPE(SCOPE.BITS, 2),
+  PERMISSION(TYPE.BITS, 3),
+  NAMED_ENTRY_CHECK(PERMISSION.BITS, 1),
+  NAME(NAMED_ENTRY_CHECK.BITS, 25);
+
+  private final LongBitFormat BITS;
+
+  private AclEntryStatusFormat(LongBitFormat previous, int length) {
+BITS = new LongBitFormat(name(), previous, length, 0);
+  }
+
+  static AclEntryScope getScope(int aclEntry) {
+int ordinal = (int) SCOPE.BITS.retrieve(aclEntry);
+return AclEntryScope.values()[ordinal];
+  }
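The bit-packing idea behind HDFS-7454 — collapsing a whole `AclEntry` object into one `int` — can be illustrated with explicit shifts and masks. This is a sketch only: the field widths match the javadoc above, but the concrete shift offsets are illustrative and are not claimed to match `LongBitFormat`'s exact layout:

```java
/** Illustrative packing of scope/type/permission/named-flag/name-id into
 *  one 32-bit int, in the spirit of AclEntryStatusFormat. Offsets are
 *  hypothetical; the real class delegates layout to LongBitFormat. */
public class AclPackSketch {
    // widths: scope=1, type=2, permission=3, namedFlag=1, nameId=25
    public static int pack(int scope, int type, int perm, boolean named, int nameId) {
        return (scope << 31) | (type << 29) | (perm << 26)
             | ((named ? 1 : 0) << 25) | (nameId & 0x1FFFFFF);
    }
    public static int scope(int e)  { return (e >>> 31) & 0x1; }
    public static int type(int e)   { return (e >>> 29) & 0x3; }
    public static int perm(int e)   { return (e >>> 26) & 0x7; }
    public static int nameId(int e) { return e & 0x1FFFFFF; }

    public static void main(String[] args) {
        int e = pack(1, 2, 7, true, 12345);
        System.out.println(scope(e) + " " + type(e) + " "
            + perm(e) + " " + nameId(e));  // 1 2 7 12345
    }
}
```

Four bytes per entry (plus a shared string table for names) is what yields the NameNode memory savings the JIRA title describes, versus a full object per `AclEntry`.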
+
+  static AclEntryType getType(int aclEntry) {
+int 

[04/29] hadoop git commit: HADOOP-11342. KMS key ACL should ignore ALL operation for default key ACL and whitelist key ACL. Contributed by Dian Fu.

2014-12-08 Thread vinayakumarb
HADOOP-11342. KMS key ACL should ignore ALL operation for default key ACL and 
whitelist key ACL. Contributed by Dian Fu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1812241e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1812241e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1812241e

Branch: refs/heads/HDFS-EC
Commit: 1812241ee10c0a98844bffb9341f770d54655f52
Parents: 03ab24a
Author: Andrew Wang w...@apache.org
Authored: Wed Dec 3 12:00:14 2014 -0800
Committer: Andrew Wang w...@apache.org
Committed: Wed Dec 3 12:00:14 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../hadoop/crypto/key/kms/server/KMSACLs.java   | 26 ++--
 .../hadoop/crypto/key/kms/server/TestKMS.java   |  5 +++-
 3 files changed, 26 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1812241e/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 2f17f22..7a2159f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -493,6 +493,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11344. KMS kms-config.sh sets a default value for the keystore
 password even in non-ssl setup. (Arun Suresh via wang)
 
+HADOOP-11342. KMS key ACL should ignore ALL operation for default key ACL
+and whitelist key ACL. (Dian Fu via wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1812241e/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
index 0217589..c33dd4b 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
@@ -152,20 +152,30 @@ public class KMSACLs implements Runnable, KeyACLs {
 String confKey = KMSConfiguration.DEFAULT_KEY_ACL_PREFIX + keyOp;
 String aclStr = conf.get(confKey);
 if (aclStr != null) {
-  if (aclStr.equals("*")) {
-    LOG.info("Default Key ACL for KEY_OP '{}' is set to '*'", keyOp);
+  if (keyOp == KeyOpType.ALL) {
+    // Ignore All operation for default key acl
+    LOG.warn("Should not configure default key ACL for KEY_OP '{}'", keyOp);
+  } else {
+    if (aclStr.equals("*")) {
+      LOG.info("Default Key ACL for KEY_OP '{}' is set to '*'", keyOp);
+    }
+    defaultKeyAcls.put(keyOp, new AccessControlList(aclStr));
   }
-  defaultKeyAcls.put(keyOp, new AccessControlList(aclStr));
 }
   }
   if (!whitelistKeyAcls.containsKey(keyOp)) {
 String confKey = KMSConfiguration.WHITELIST_KEY_ACL_PREFIX + keyOp;
 String aclStr = conf.get(confKey);
 if (aclStr != null) {
-  if (aclStr.equals("*")) {
-    LOG.info("Whitelist Key ACL for KEY_OP '{}' is set to '*'", keyOp);
+  if (keyOp == KeyOpType.ALL) {
+    // Ignore All operation for whitelist key acl
+    LOG.warn("Should not configure whitelist key ACL for KEY_OP '{}'", keyOp);
+  } else {
+    if (aclStr.equals("*")) {
+      LOG.info("Whitelist Key ACL for KEY_OP '{}' is set to '*'", keyOp);
+    }
+    whitelistKeyAcls.put(keyOp, new AccessControlList(aclStr));
   }
-  whitelistKeyAcls.put(keyOp, new AccessControlList(aclStr));
 }
   }
 }
@@ -271,7 +281,9 @@ public class KMSACLs implements Runnable, KeyACLs {
 
   @Override
   public boolean isACLPresent(String keyName, KeyOpType opType) {
-return (keyAcls.containsKey(keyName) || 
defaultKeyAcls.containsKey(opType));
+return (keyAcls.containsKey(keyName)
+|| defaultKeyAcls.containsKey(opType)
+|| whitelistKeyAcls.containsKey(opType));
   }
 
 }
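The `isACLPresent` change is the core of HADOOP-11342: a key ACL now counts as "present" if it exists per-key, in the defaults, or — newly — in the whitelist. A minimal sketch of that three-way fallback, with maps of booleans standing in for `AccessControlList` and `KeyOpType` (an assumption made to keep the sketch self-contained):

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of the patched isACLPresent fallback chain. Boolean map values
 *  stand in for AccessControlList; op types are plain strings here. */
public class KeyAclSketch {
    final Map<String, Boolean> keyAcls = new HashMap<String, Boolean>();
    final Map<String, Boolean> defaultKeyAcls = new HashMap<String, Boolean>();
    final Map<String, Boolean> whitelistKeyAcls = new HashMap<String, Boolean>();

    public boolean isACLPresent(String keyName, String opType) {
        // Per-key ACL, then default ACL, then (the new clause) whitelist ACL.
        return keyAcls.containsKey(keyName)
            || defaultKeyAcls.containsKey(opType)
            || whitelistKeyAcls.containsKey(opType);
    }

    public static void main(String[] args) {
        KeyAclSketch acls = new KeyAclSketch();
        acls.whitelistKeyAcls.put("DECRYPT_EEK", Boolean.TRUE);
        // Before the fix a whitelist-only entry was invisible here.
        System.out.println(acls.isACLPresent("k1", "DECRYPT_EEK"));  // true
    }
}
```

Together with skipping `ALL` when populating the default and whitelist maps, this keeps the per-operation maps consistent: an `ALL` entry never masks the finer-grained operations it would otherwise shadow.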

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1812241e/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
 

[21/29] hadoop git commit: YARN-2056. Disable preemption at Queue level. Contributed by Eric Payne

2014-12-08 Thread vinayakumarb
YARN-2056. Disable preemption at Queue level. Contributed by Eric Payne


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4b130821
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4b130821
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4b130821

Branch: refs/heads/HDFS-EC
Commit: 4b130821995a3cfe20c71e38e0f63294085c0491
Parents: 3c72f54
Author: Jason Lowe jl...@apache.org
Authored: Fri Dec 5 21:06:48 2014 +
Committer: Jason Lowe jl...@apache.org
Committed: Fri Dec 5 21:06:48 2014 +

--
 hadoop-yarn-project/CHANGES.txt |   2 +
 .../ProportionalCapacityPreemptionPolicy.java   | 170 +--
 ...estProportionalCapacityPreemptionPolicy.java | 283 ++-
 3 files changed, 424 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b130821/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 9804d61..0b88959 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -126,6 +126,8 @@ Release 2.7.0 - UNRELEASED
 
 YARN-2301. Improved yarn container command. (Naganarasimha G R via jianhe)
 
+YARN-2056. Disable preemption at Queue level (Eric Payne via jlowe)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b130821/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
index 0f48b0c..1a3f804 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
@@ -27,6 +27,7 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.NavigableSet;
+import java.util.PriorityQueue;
 import java.util.Set;
 
 import org.apache.commons.logging.Log;
@@ -111,6 +112,9 @@ public class ProportionalCapacityPreemptionPolicy 
implements SchedulingEditPolic
   public static final String NATURAL_TERMINATION_FACTOR =
   
yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor;
 
+  public static final String BASE_YARN_RM_PREEMPTION =
+      "yarn.scheduler.capacity.";
+  public static final String SUFFIX_DISABLE_PREEMPTION = ".disable_preemption";
+
   // the dispatcher to send preempt and kill events
   public EventHandlerContainerPreemptEvent dispatcher;
 
@@ -192,7 +196,7 @@ public class ProportionalCapacityPreemptionPolicy 
implements SchedulingEditPolic
 // extract a summary of the queues from scheduler
 TempQueue tRoot;
 synchronized (scheduler) {
-  tRoot = cloneQueues(root, clusterResources);
+  tRoot = cloneQueues(root, clusterResources, false);
 }
 
 // compute the ideal distribution of resources among queues
@@ -370,28 +374,60 @@ public class ProportionalCapacityPreemptionPolicy 
implements SchedulingEditPolic
   private void computeFixpointAllocation(ResourceCalculator rc,
       Resource tot_guarant, Collection<TempQueue> qAlloc, Resource unassigned,
       boolean ignoreGuarantee) {
+    // Prior to assigning the unused resources, process each queue as follows:
+    // If current > guaranteed, idealAssigned = guaranteed + untouchable extra
+    // Else idealAssigned = current;
+    // Subtract idealAssigned resources from unassigned.
+    // If the queue has all of its needs met (that is, if
+    // idealAssigned >= current + pending), remove it from consideration.
+    // Sort queues from most under-guaranteed to most over-guaranteed.
+    TQComparator tqComparator = new TQComparator(rc, tot_guarant);
+    PriorityQueue<TempQueue> orderedByNeed =
+        new PriorityQueue<TempQueue>(10, tqComparator);
+    for (Iterator<TempQueue> i = qAlloc.iterator(); i.hasNext();) {
+      TempQueue q = i.next();
+      if (Resources.greaterThan(rc, 
+  if (Resources.greaterThan(rc, 

[24/29] hadoop git commit: HADOOP-11343. Overflow is not properly handled in caclulating final iv for AES CTR. Contributed by Jerry Chen.

2014-12-08 Thread vinayakumarb
HADOOP-11343. Overflow is not properly handled in caclulating final iv for AES 
CTR. Contributed by Jerry Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0707e4ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0707e4ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0707e4ec

Branch: refs/heads/HDFS-EC
Commit: 0707e4eca906552c960e3b8c4e20d9913145eca6
Parents: e69af83
Author: Andrew Wang w...@apache.org
Authored: Fri Dec 5 18:20:19 2014 -0800
Committer: Andrew Wang w...@apache.org
Committed: Fri Dec 5 18:20:19 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +
 .../apache/hadoop/crypto/AesCtrCryptoCodec.java | 27 -
 .../apache/hadoop/crypto/TestCryptoCodec.java   | 64 
 3 files changed, 79 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0707e4ec/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 7a6a938..965c6d3 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -508,6 +508,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11355. When accessing data in HDFS and the key has been deleted,
 a Null Pointer Exception is shown. (Arun Suresh via wang)
 
+HADOOP-11343. Overflow is not properly handled in caclulating final iv for
+AES CTR. (Jerry Chen via wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0707e4ec/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
index 8f8bc66..5e286b9 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
@@ -33,7 +33,6 @@ public abstract class AesCtrCryptoCodec extends CryptoCodec {
* @see http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
*/
   private static final int AES_BLOCK_SIZE = SUITE.getAlgorithmBlockSize();
-  private static final int CTR_OFFSET = 8;
 
   @Override
   public CipherSuite getCipherSuite() {
@@ -48,20 +47,18 @@ public abstract class AesCtrCryptoCodec extends CryptoCodec 
{
   public void calculateIV(byte[] initIV, long counter, byte[] IV) {
 Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
 Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);
-
-System.arraycopy(initIV, 0, IV, 0, CTR_OFFSET);
-    long l = 0;
-    for (int i = 0; i < 8; i++) {
-      l = ((l << 8) | (initIV[CTR_OFFSET + i] & 0xff));
+
+    int i = IV.length; // IV length
+    int j = 0; // counter bytes index
+    int sum = 0;
+    while (i-- > 0) {
+      // (sum >>> Byte.SIZE) is the carry for addition
+      sum = (initIV[i] & 0xff) + (sum >>> Byte.SIZE);
+      if (j++ < 8) { // Big-endian, and long is 8 bytes length
+        sum += (byte) counter & 0xff;
+        counter >>>= 8;
+      }
+      IV[i] = (byte) sum;
     }
-    l += counter;
-    IV[CTR_OFFSET + 0] = (byte) (l >>> 56);
-    IV[CTR_OFFSET + 1] = (byte) (l >>> 48);
-    IV[CTR_OFFSET + 2] = (byte) (l >>> 40);
-    IV[CTR_OFFSET + 3] = (byte) (l >>> 32);
-    IV[CTR_OFFSET + 4] = (byte) (l >>> 24);
-    IV[CTR_OFFSET + 5] = (byte) (l >>> 16);
-    IV[CTR_OFFSET + 6] = (byte) (l >>> 8);
-    IV[CTR_OFFSET + 7] = (byte) (l);
   }
 }
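The HADOOP-11343 fix replaces an 8-byte add (which silently overflowed when the low half of the IV was near 2^64) with a carry-propagating add of the 64-bit counter into the full 16-byte big-endian IV. A standalone restatement of the fixed loop, cross-checked against `BigInteger` arithmetic mod 2^128 (the updated test's new `BigInteger`/`SecureRandom` imports suggest a similar cross-check, though this harness is a sketch):

```java
import java.math.BigInteger;
import java.util.Arrays;

/** Standalone restatement of the fixed calculateIV: add the counter into
 *  the full 16-byte big-endian IV, propagating carries through all bytes. */
public class CtrIvSketch {
    public static void calculateIV(byte[] initIV, long counter, byte[] IV) {
        int i = IV.length;    // walk from least-significant byte upward
        int j = 0;            // counter bytes consumed so far
        int sum = 0;
        while (i-- > 0) {
            sum = (initIV[i] & 0xff) + (sum >>> Byte.SIZE); // carry in
            if (j++ < 8) {                                  // long is 8 bytes
                sum += (byte) counter & 0xff;
                counter >>>= 8;
            }
            IV[i] = (byte) sum;
        }
    }

    public static void main(String[] args) {
        byte[] initIV = new byte[16];
        Arrays.fill(initIV, (byte) 0xff);   // worst case: every byte carries
        byte[] iv = new byte[16];
        calculateIV(initIV, 5L, iv);
        // Cross-check: (initIV + 5) mod 2^128, computed with BigInteger.
        BigInteger expect = new BigInteger(1, initIV)
            .add(BigInteger.valueOf(5))
            .mod(BigInteger.ONE.shiftLeft(128));
        System.out.println(new BigInteger(1, iv).equals(expect));  // true
    }
}
```

With the old code, the all-0xff IV above would have wrapped only within the low 8 bytes, decrypting the wrong counter block; propagating the carry across the full 128 bits matches how AES-CTR implementations advance the counter.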

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0707e4ec/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
index 79987ce..08231f9 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
@@ -23,7 +23,9 @@ import static org.junit.Assert.assertTrue;
 import java.io.BufferedInputStream;
 import java.io.DataInputStream;
 import java.io.IOException;
+import java.math.BigInteger;
 import java.security.GeneralSecurityException;
+import java.security.SecureRandom;
 import java.util.Arrays;
 import 

[26/29] hadoop git commit: HDFS-7476. Consolidate ACL-related operations to a single class. Contributed by Haohui Mai.

2014-12-08 Thread vinayakumarb
HDFS-7476. Consolidate ACL-related operations to a single class. Contributed by 
Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9297f980
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9297f980
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9297f980

Branch: refs/heads/HDFS-EC
Commit: 9297f980c2de8886ff970946a2513e6890cd5552
Parents: e227fb8
Author: cnauroth cnaur...@apache.org
Authored: Sat Dec 6 14:20:00 2014 -0800
Committer: cnauroth cnaur...@apache.org
Committed: Sat Dec 6 14:20:00 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hadoop/hdfs/server/namenode/AclStorage.java |  33 ---
 .../hadoop/hdfs/server/namenode/FSDirAclOp.java | 244 +++
 .../hdfs/server/namenode/FSDirectory.java   | 158 ++--
 .../hdfs/server/namenode/FSEditLogLoader.java   |   2 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 119 ++---
 .../hdfs/server/namenode/TestAuditLogger.java   |  79 ++
 7 files changed, 318 insertions(+), 320 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9297f980/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 87b02c4..769be43 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -438,6 +438,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7459. Consolidate cache-related implementation in FSNamesystem into
 a single class. (wheat9)
 
+HDFS-7476. Consolidate ACL-related operations to a single class.
+(wheat9 via cnauroth)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9297f980/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
index ac30597..a866046 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
@@ -241,39 +241,6 @@ final class AclStorage {
   }
 
   /**
-   * Completely removes the ACL from an inode.
-   *
-   * @param inode INode to update
-   * @param snapshotId int latest snapshot ID of inode
-   * @throws QuotaExceededException if quota limit is exceeded
-   */
-  public static void removeINodeAcl(INode inode, int snapshotId)
-  throws QuotaExceededException {
-AclFeature f = inode.getAclFeature();
-if (f == null) {
-  return;
-}
-
-FsPermission perm = inode.getFsPermission();
-List<AclEntry> featureEntries = getEntriesFromAclFeature(f);
-if (featureEntries.get(0).getScope() == AclEntryScope.ACCESS) {
-  // Restore group permissions from the feature's entry to permission
-  // bits, overwriting the mask, which is not part of a minimal ACL.
-  AclEntry groupEntryKey = new AclEntry.Builder()
-  .setScope(AclEntryScope.ACCESS).setType(AclEntryType.GROUP).build();
-  int groupEntryIndex = Collections.binarySearch(featureEntries,
-  groupEntryKey, AclTransformation.ACL_ENTRY_COMPARATOR);
-  assert groupEntryIndex >= 0;
-  FsAction groupPerm = featureEntries.get(groupEntryIndex).getPermission();
-  FsPermission newPerm = new FsPermission(perm.getUserAction(), groupPerm,
-  perm.getOtherAction(), perm.getStickyBit());
-  inode.setPermission(newPerm, snapshotId);
-}
-
-inode.removeAclFeature(snapshotId);
-  }
-
-  /**
* Updates an inode with a new ACL.  This method takes a full logical ACL and
* stores the entries to the inode's {@link FsPermission} and
* {@link AclFeature}.
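For context, the removed `removeINodeAcl` logic (now consolidated into FSDirAclOp) hinges on one detail: while an extended ACL is present, the FsPermission group bits hold the mask, and the real group permission lives in the unnamed ACCESS/GROUP entry, so dropping the ACL must copy that entry back into the permission bits. A simplified, self-contained sketch using stand-in types (not Hadoop's AclEntry/FsPermission; the real code binary-searches a sorted entry list rather than iterating):

```java
import java.util.List;

// Simplified sketch of restoring group permission bits on ACL removal.
public class RemoveAclSketch {
  enum Scope { ACCESS, DEFAULT }
  enum Type { USER, GROUP, MASK, OTHER }

  record Entry(Scope scope, Type type, String name, int perm) {}

  /** Returns the group bits to restore, or the current bits if none apply. */
  static int groupBitsAfterAclRemoval(List<Entry> featureEntries,
                                      int currentGroupBits) {
    if (featureEntries.isEmpty()
        || featureEntries.get(0).scope() != Scope.ACCESS) {
      return currentGroupBits; // default-only ACL: group bits already correct
    }
    for (Entry e : featureEntries) {
      if (e.scope() == Scope.ACCESS && e.type() == Type.GROUP
          && e.name() == null) {
        return e.perm(); // the unnamed group entry holds the real permission
      }
    }
    return currentGroupBits;
  }

  public static void main(String[] args) {
    List<Entry> acl = List.of(
        new Entry(Scope.ACCESS, Type.USER, "bob", 6),
        new Entry(Scope.ACCESS, Type.GROUP, null, 4),
        new Entry(Scope.ACCESS, Type.MASK, null, 6));
    // the mask (6) sat in the group bits; removal restores the real perm
    System.out.println(groupBitsAfterAclRemoval(acl, 6)); // prints 4
  }
}
```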

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9297f980/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
new file mode 100644
index 000..ac899aa
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
@@ -0,0 +1,244 @@
+/**
+ * Licensed to the Apache Software 

[20/29] hadoop git commit: YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration. (rchiang via rkanter)

2014-12-08 Thread vinayakumarb
YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in 
YarnConfiguration. (rchiang via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3c72f54e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3c72f54e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3c72f54e

Branch: refs/heads/HDFS-EC
Commit: 3c72f54ef581b4f3e2eb84e1e24e459c38d3f769
Parents: 9cdaec6
Author: Robert Kanter rkan...@apache.org
Authored: Fri Dec 5 12:07:01 2014 -0800
Committer: Robert Kanter rkan...@apache.org
Committed: Fri Dec 5 12:07:41 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt   | 3 +++
 .../main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c72f54e/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 252b7d5..9804d61 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -189,6 +189,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2874. Dead lock in DelegationTokenRenewer which blocks RM to execute
 any further apps. (Naganarasimha G R via kasha)
 
+YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in
+YarnConfiguration. (rchiang via rkanter)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c72f54e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index f0f88d8..10ba832 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -819,7 +819,7 @@ public class YarnConfiguration extends Configuration {
   public static final String NM_CONTAINER_MON_PROCESS_TREE =
     NM_PREFIX + "container-monitor.process-tree.class";
   public static final String PROCFS_USE_SMAPS_BASED_RSS_ENABLED = NM_PREFIX +
-  ".container-monitor.procfs-tree.smaps-based-rss.enabled";
+  "container-monitor.procfs-tree.smaps-based-rss.enabled";
   public static final boolean DEFAULT_PROCFS_USE_SMAPS_BASED_RSS_ENABLED =
   false;
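The one-character fix above is easy to miss: `NM_PREFIX` already ends in a dot (its value in YarnConfiguration is `"yarn.nodemanager."`), so the stray leading dot produced a key with a double dot that never matched what users set under the documented name. A minimal illustration (hypothetical class name; the prefix value is taken from YarnConfiguration):

```java
// Illustrates why the leading dot in the concatenated key was a bug.
public class PrefixBugSketch {
  static final String NM_PREFIX = "yarn.nodemanager.";

  public static void main(String[] args) {
    String broken = NM_PREFIX
        + ".container-monitor.procfs-tree.smaps-based-rss.enabled";
    String fixed = NM_PREFIX
        + "container-monitor.procfs-tree.smaps-based-rss.enabled";
    System.out.println(broken); // yarn.nodemanager..container-monitor...
    System.out.println(fixed);  // yarn.nodemanager.container-monitor...
  }
}
```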
   



[28/29] hadoop git commit: YARN-2927. [YARN-1492] InMemorySCMStore properties are inconsistent. (Ray Chiang via kasha)

2014-12-08 Thread vinayakumarb
YARN-2927. [YARN-1492] InMemorySCMStore properties are inconsistent. (Ray 
Chiang via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/120e1dec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/120e1dec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/120e1dec

Branch: refs/heads/HDFS-EC
Commit: 120e1decd7f6861e753269690d454cb14c240857
Parents: 1b3bb9e
Author: Karthik Kambatla ka...@apache.org
Authored: Sun Dec 7 22:28:26 2014 -0800
Committer: Karthik Kambatla ka...@apache.org
Committed: Sun Dec 7 22:28:26 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt   | 3 +++
 .../main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/120e1dec/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0d7a843..43b19ec 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -197,6 +197,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2869. CapacityScheduler should trim sub queue names when parse
 configuration. (Wangda Tan via jianhe)
 
+YARN-2927. [YARN-1492] InMemorySCMStore properties are inconsistent. 
+(Ray Chiang via kasha)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/120e1dec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 10ba832..55073c5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1416,7 +1416,7 @@ public class YarnConfiguration extends Configuration {
   // In-memory SCM store configuration
   
   public static final String IN_MEMORY_STORE_PREFIX =
-  SHARED_CACHE_PREFIX + "in-memory.";
+  SCM_STORE_PREFIX + "in-memory.";
 
   /**
* A resource in the InMemorySCMStore is considered stale if the time since



[16/29] hadoop git commit: HADOOP-11356. Removed deprecated o.a.h.fs.permission.AccessControlException. Contributed by Li Lu.

2014-12-08 Thread vinayakumarb
HADOOP-11356. Removed deprecated o.a.h.fs.permission.AccessControlException. 
Contributed by Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2829b7a9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2829b7a9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2829b7a9

Branch: refs/heads/HDFS-EC
Commit: 2829b7a96ffe6d2ca5e81689c7957e4e97042f2d
Parents: 0653918
Author: Haohui Mai whe...@apache.org
Authored: Fri Dec 5 10:49:43 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Fri Dec 5 10:49:43 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +
 .../fs/permission/AccessControlException.java   | 66 
 .../hadoop/security/AccessControlException.java |  4 +-
 3 files changed, 5 insertions(+), 68 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2829b7a9/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index f2a086e..2f88fc8 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -410,6 +410,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11301. [optionally] update jmx cache to drop old metrics
 (Maysam Yabandeh via stack)
 
+HADOOP-11356. Removed deprecated o.a.h.fs.permission.AccessControlException.
+(Li Lu via wheat9)
+
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2829b7a9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AccessControlException.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AccessControlException.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AccessControlException.java
deleted file mode 100644
index 1cd6395..000
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AccessControlException.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.fs.permission;
-
-import java.io.IOException;
-
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
-
-/**
- * An exception class for access control related issues.
- * @deprecated Use {@link org.apache.hadoop.security.AccessControlException} 
- * instead.
- */
-@Deprecated
-@InterfaceAudience.Public
-@InterfaceStability.Stable
-public class AccessControlException extends IOException {
-  //Required by {@link java.io.Serializable}.
-  private static final long serialVersionUID = 1L;
-
-  /**
-   * Default constructor is needed for unwrapping from 
-   * {@link org.apache.hadoop.ipc.RemoteException}.
-   */
-  public AccessControlException() {
-super("Permission denied.");
-  }
-
-  /**
-   * Constructs an {@link AccessControlException}
-   * with the specified detail message.
-   * @param s the detail message.
-   */
-  public AccessControlException(String s) {
-super(s);
-  }
-  
-  /**
-   * Constructs a new exception with the specified cause and a detail
-   * message of <tt>(cause==null ? null : cause.toString())</tt> (which
-   * typically contains the class and detail message of <tt>cause</tt>).
-   * @param  cause the cause (which is saved for later retrieval by the
-   * {@link #getCause()} method).  (A <tt>null</tt> value is
-   * permitted, and indicates that the cause is nonexistent or
-   * unknown.)
-   */
-  public AccessControlException(Throwable cause) {
-super(cause);
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2829b7a9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AccessControlException.java

[03/29] hadoop git commit: MAPREDUCE-5932. Provide an option to use a dedicated reduce-side shuffle log. Contributed by Gera Shegalov

2014-12-08 Thread vinayakumarb
MAPREDUCE-5932. Provide an option to use a dedicated reduce-side shuffle log. 
Contributed by Gera Shegalov


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/03ab24aa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/03ab24aa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/03ab24aa

Branch: refs/heads/HDFS-EC
Commit: 03ab24aa01ffea1cacf1fa9cbbf73c3f2904d981
Parents: 22afae8
Author: Jason Lowe jl...@apache.org
Authored: Wed Dec 3 17:02:14 2014 +
Committer: Jason Lowe jl...@apache.org
Committed: Wed Dec 3 17:02:14 2014 +

--
 hadoop-mapreduce-project/CHANGES.txt|  3 +
 .../apache/hadoop/mapred/MapReduceChildJVM.java | 34 +
 .../v2/app/job/impl/TestMapReduceChildJVM.java  | 71 -
 .../apache/hadoop/mapreduce/v2/util/MRApps.java | 80 +---
 .../apache/hadoop/mapred/FileOutputFormat.java  |  4 +-
 .../java/org/apache/hadoop/mapred/TaskLog.java  |  4 +
 .../apache/hadoop/mapreduce/MRJobConfig.java| 14 
 .../src/main/resources/mapred-default.xml   | 28 +++
 .../org/apache/hadoop/mapred/YARNRunner.java|  9 +--
 .../hadoop/yarn/ContainerLogAppender.java   | 11 ++-
 .../yarn/ContainerRollingLogAppender.java   | 11 ++-
 .../hadoop/yarn/TestContainerLogAppender.java   |  1 +
 .../main/resources/container-log4j.properties   | 29 ++-
 13 files changed, 243 insertions(+), 56 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/03ab24aa/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt b/hadoop-mapreduce-project/CHANGES.txt
index 5417c3e..3f34acd 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -235,6 +235,9 @@ Release 2.7.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-5932. Provide an option to use a dedicated reduce-side shuffle
+log (Gera Shegalov via jlowe)
+
   OPTIMIZATIONS
 
 MAPREDUCE-6169. MergeQueue should release reference to the current item 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/03ab24aa/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java
--
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java
index c790c57..817b3a5 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java
@@ -20,16 +20,14 @@ package org.apache.hadoop.mapred;
 
 import java.net.InetSocketAddress;
 import java.util.HashMap;
-import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Vector;
 
-import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.mapred.TaskLog.LogName;
-import org.apache.hadoop.mapreduce.ID;
 import org.apache.hadoop.mapreduce.MRJobConfig;
+import org.apache.hadoop.mapreduce.TypeConverter;
 import org.apache.hadoop.mapreduce.v2.util.MRApps;
 import org.apache.hadoop.yarn.api.ApplicationConstants;
 import org.apache.hadoop.yarn.api.ApplicationConstants.Environment;
@@ -52,20 +50,6 @@ public class MapReduceChildJVM {
 jobConf.get(JobConf.MAPRED_TASK_ENV));
   }
 
-  private static String getChildLogLevel(JobConf conf, boolean isMap) {
-if (isMap) {
-  return conf.get(
-  MRJobConfig.MAP_LOG_LEVEL, 
-  JobConf.DEFAULT_LOG_LEVEL.toString()
-  );
-} else {
-  return conf.get(
-  MRJobConfig.REDUCE_LOG_LEVEL, 
-  JobConf.DEFAULT_LOG_LEVEL.toString()
-  );
-}
-  }
-  
   public static void setVMEnv(Map<String, String> environment,
   Task task) {
 
@@ -79,7 +63,7 @@ public class MapReduceChildJVM {
 // streaming) it will have the correct loglevel.
 environment.put(
"HADOOP_ROOT_LOGGER",
-getChildLogLevel(conf, task.isMapTask()) + ",console");
+MRApps.getChildLogLevel(conf, task.isMapTask()) + ",console");
 
 // TODO: The following is useful for instance in streaming tasks. Should be
 // set in ApplicationMaster's env by the RM.
@@ -147,15 +131,6 @@ public class MapReduceChildJVM {
 return adminClasspath + " " + userClasspath;
   }
 
-  private static void setupLog4jProperties(Task task,
-  Vector<String> 

[09/29] hadoop git commit: HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if kerberos TGT is available in the subject. Contributed by Dian Fu.

2014-12-08 Thread vinayakumarb
HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if kerberos 
TGT is available in the subject. Contributed by Dian Fu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9d1a8f58
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9d1a8f58
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9d1a8f58

Branch: refs/heads/HDFS-EC
Commit: 9d1a8f5897d585bec96de32116fbd2118f8e0f95
Parents: 73fbb3c
Author: Aaron T. Myers a...@apache.org
Authored: Wed Dec 3 18:53:45 2014 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Wed Dec 3 18:53:45 2014 -0800

--
 .../security/authentication/client/KerberosAuthenticator.java  | 6 +-
 hadoop-common-project/hadoop-common/CHANGES.txt| 3 +++
 2 files changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d1a8f58/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
--
diff --git a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
index e4ebf1b..928866c 100644
--- a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
+++ b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
@@ -23,6 +23,8 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import javax.security.auth.Subject;
+import javax.security.auth.kerberos.KerberosKey;
+import javax.security.auth.kerberos.KerberosTicket;
 import javax.security.auth.login.AppConfigurationEntry;
 import javax.security.auth.login.Configuration;
 import javax.security.auth.login.LoginContext;
@@ -247,7 +249,9 @@ public class KerberosAuthenticator implements Authenticator {
 try {
   AccessControlContext context = AccessController.getContext();
   Subject subject = Subject.getSubject(context);
-  if (subject == null) {
+  if (subject == null
+  || (subject.getPrivateCredentials(KerberosKey.class).isEmpty()
+  && subject.getPrivateCredentials(KerberosTicket.class).isEmpty())) {
 LOG.debug("No subject in context, logging in");
 subject = new Subject();
 LoginContext login = new LoginContext("", subject,
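The guard added in this hunk can be exercised on its own: an ambient Subject is only reused if it actually carries Kerberos material (a `KerberosKey` or a `KerberosTicket` among its private credentials); otherwise a fresh login is triggered. A standalone sketch (hypothetical class name and `needsLogin` helper; a `KerberosTicket` is omitted because constructing one needs a full ticket byte stream, so the key case stands in for both):

```java
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosKey;
import javax.security.auth.kerberos.KerberosPrincipal;
import javax.security.auth.kerberos.KerberosTicket;

// Standalone sketch of the HADOOP-11332 credential check.
public class SubjectCheckSketch {
  static boolean needsLogin(Subject subject) {
    return subject == null
        || (subject.getPrivateCredentials(KerberosKey.class).isEmpty()
            && subject.getPrivateCredentials(KerberosTicket.class).isEmpty());
  }

  public static void main(String[] args) {
    System.out.println(needsLogin(null));          // true: no subject at all
    System.out.println(needsLogin(new Subject())); // true: subject, no creds

    Subject s = new Subject();
    s.getPrivateCredentials().add(new KerberosKey(
        new KerberosPrincipal("user@EXAMPLE.COM"), new byte[16],
        17 /* aes128-cts-hmac-sha1-96 */, 0));
    System.out.println(needsLogin(s)); // false: Kerberos material present
  }
}
```

Before the fix, any non-null Subject short-circuited the login, even one created by a non-Kerberos login module with no usable credentials.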

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d1a8f58/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index 7a2159f..f53bceb 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -496,6 +496,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11342. KMS key ACL should ignore ALL operation for default key ACL
 and whitelist key ACL. (Dian Fu via wang)
 
+HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if
+kerberos TGT is available in the subject. (Dian Fu via atm)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[14/29] hadoop git commit: YARN-2189. [YARN-1492] Admin service for cache manager. (Chris Trezzo via kasha)

2014-12-08 Thread vinayakumarb
YARN-2189. [YARN-1492] Admin service for cache manager. (Chris Trezzo via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/78968155
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/78968155
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/78968155

Branch: refs/heads/HDFS-EC
Commit: 78968155d7f87f2147faf96c5eef9c23dba38db8
Parents: 26d8dec
Author: Karthik Kambatla ka...@apache.org
Authored: Thu Dec 4 17:36:32 2014 -0800
Committer: Karthik Kambatla ka...@apache.org
Committed: Thu Dec 4 17:36:32 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 hadoop-yarn-project/hadoop-yarn/bin/yarn|   5 +
 .../hadoop-yarn/hadoop-yarn-api/pom.xml |   1 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  12 ++
 .../yarn/server/api/SCMAdminProtocol.java   |  53 ++
 .../yarn/server/api/SCMAdminProtocolPB.java |  31 
 .../RunSharedCacheCleanerTaskRequest.java   |  37 
 .../RunSharedCacheCleanerTaskResponse.java  |  58 ++
 .../main/proto/server/SCM_Admin_protocol.proto  |  29 +++
 .../src/main/proto/yarn_service_protos.proto|  11 ++
 .../org/apache/hadoop/yarn/client/SCMAdmin.java | 183 +++
 .../pb/client/SCMAdminProtocolPBClientImpl.java |  73 
 .../service/SCMAdminProtocolPBServiceImpl.java  |  57 ++
 .../RunSharedCacheCleanerTaskRequestPBImpl.java |  53 ++
 ...RunSharedCacheCleanerTaskResponsePBImpl.java |  66 +++
 .../src/main/resources/yarn-default.xml |  12 ++
 .../SCMAdminProtocolService.java| 146 +++
 .../sharedcachemanager/SharedCacheManager.java  |   8 +
 .../TestSCMAdminProtocolService.java| 135 ++
 19 files changed, 973 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/78968155/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index f032b4f..252b7d5 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -54,6 +54,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2188. [YARN-1492] Client service for cache manager. 
 (Chris Trezzo and Sangjin Lee via kasha)
 
+YARN-2189. [YARN-1492] Admin service for cache manager.
+(Chris Trezzo via kasha)
+
 YARN-2765. Added leveldb-based implementation for RMStateStore. (Jason Lowe
 via jianhe)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/78968155/hadoop-yarn-project/hadoop-yarn/bin/yarn
--
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn b/hadoop-yarn-project/hadoop-yarn/bin/yarn
index b98f344..dfa27e4 100644
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn
@@ -36,6 +36,7 @@ function hadoop_usage
   echo "  resourcemanager -format-state-store   deletes the RMStateStore"
   echo "  rmadmin                               admin tools"
   echo "  sharedcachemanager                    run the SharedCacheManager daemon"
+  echo "  scmadmin                              SharedCacheManager admin tools"
   echo "  timelineserver                        run the timeline server"
   echo "  version                               print the version"
   echo " or"
@@ -162,6 +163,10 @@ case ${COMMAND} in
 CLASS='org.apache.hadoop.yarn.server.sharedcachemanager.SharedCacheManager'
 YARN_OPTS="$YARN_OPTS $YARN_SHAREDCACHEMANAGER_OPTS"
   ;;
+  scmadmin)
+CLASS='org.apache.hadoop.yarn.client.SCMAdmin'
+YARN_OPTS="$YARN_OPTS $YARN_CLIENT_OPTS"
+  ;;
   version)
 CLASS=org.apache.hadoop.util.VersionInfo
 hadoop_debug "Append YARN_CLIENT_OPTS onto YARN_OPTS"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/78968155/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
index 5e2278d..a763d39 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
@@ -97,6 +97,7 @@
   <include>application_history_client.proto</include>
   <include>server/application_history_server.proto</include>
   <include>client_SCM_protocol.proto</include>
+  <include>server/SCM_Admin_protocol.proto</include>
 </includes>
   </source>
   <output>${project.build.directory}/generated-sources/java</output>


[11/29] hadoop git commit: HADOOP-11348. Remove unused variable from CMake error message for finding openssl (Dian Fu via Colin P. McCabe)

2014-12-08 Thread vinayakumarb
HADOOP-11348. Remove unused variable from CMake error message for finding 
openssl (Dian Fu via Colin P. McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/565b0e60
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/565b0e60
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/565b0e60

Branch: refs/heads/HDFS-EC
Commit: 565b0e60a8fc4ae5bc0083cc6a6ddb2d01952f32
Parents: 1bbcc3d
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Thu Dec 4 12:51:42 2014 -0800
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Thu Dec 4 12:52:39 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt| 3 +++
 hadoop-common-project/hadoop-common/src/CMakeLists.txt | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/565b0e60/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index f53bceb..f2a086e 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -499,6 +499,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if
 kerberos TGT is available in the subject. (Dian Fu via atm)
 
+HADOOP-11348. Remove unused variable from CMake error message for finding
+openssl (Dian Fu via Colin P. McCabe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/565b0e60/hadoop-common-project/hadoop-common/src/CMakeLists.txt
--
diff --git a/hadoop-common-project/hadoop-common/src/CMakeLists.txt b/hadoop-common-project/hadoop-common/src/CMakeLists.txt
index b8ac460..29fa2b8 100644
--- a/hadoop-common-project/hadoop-common/src/CMakeLists.txt
+++ b/hadoop-common-project/hadoop-common/src/CMakeLists.txt
@@ -202,7 +202,7 @@ if (USABLE_OPENSSL)
 ${D}/crypto/OpensslCipher.c
 ${D}/crypto/random/OpensslSecureRandom.c)
 else (USABLE_OPENSSL)
-    MESSAGE("Cannot find a usable OpenSSL library.  OPENSSL_LIBRARY=${OPENSSL_LIBRARY}, OPENSSL_INCLUDE_DIR=${OPENSSL_INCLUDE_DIR}, CUSTOM_OPENSSL_INCLUDE_DIR=${CUSTOM_OPENSSL_INCLUDE_DIR}, CUSTOM_OPENSSL_PREFIX=${CUSTOM_OPENSSL_PREFIX}, CUSTOM_OPENSSL_INCLUDE=${CUSTOM_OPENSSL_INCLUDE}")
+    MESSAGE("Cannot find a usable OpenSSL library.  OPENSSL_LIBRARY=${OPENSSL_LIBRARY}, OPENSSL_INCLUDE_DIR=${OPENSSL_INCLUDE_DIR}, CUSTOM_OPENSSL_LIB=${CUSTOM_OPENSSL_LIB}, CUSTOM_OPENSSL_PREFIX=${CUSTOM_OPENSSL_PREFIX}, CUSTOM_OPENSSL_INCLUDE=${CUSTOM_OPENSSL_INCLUDE}")
 IF(REQUIRE_OPENSSL)
     MESSAGE(FATAL_ERROR "Terminating build because require.openssl was specified.")
 ENDIF(REQUIRE_OPENSSL)



[01/29] hadoop git commit: YARN-1156. Enhance NodeManager AllocatedGB and AvailableGB metrics for aggregation of decimal values. (Contributed by Tsuyoshi OZAWA)

2014-12-08 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-EC bdc01015c - 833185345


YARN-1156. Enhance NodeManager AllocatedGB and AvailableGB metrics for 
aggregation of decimal values. (Contributed by Tsuyoshi OZAWA)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e65b7c5f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e65b7c5f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e65b7c5f

Branch: refs/heads/HDFS-EC
Commit: e65b7c5ff6b0c013e510e750fe5cf59acfefea5f
Parents: 7caa3bc
Author: Junping Du junping...@apache.org
Authored: Wed Dec 3 04:11:18 2014 -0800
Committer: Junping Du junping...@apache.org
Committed: Wed Dec 3 04:11:18 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt  |  3 +++
 .../nodemanager/metrics/NodeManagerMetrics.java  | 19 ++-
 .../metrics/TestNodeManagerMetrics.java  | 17 -
 3 files changed, 29 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e65b7c5f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 1e336b7..421e5ea 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -112,6 +112,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2136. Changed RMStateStore to ignore store opearations when fenced.
 (Varun Saxena via jianhe)
 
+YARN-1156. Enhance NodeManager AllocatedGB and AvailableGB metrics 
+for aggregation of decimal values. (Tsuyoshi OZAWA via junping_du)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e65b7c5f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
index a3637d5..beaafe1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
@@ -47,6 +47,9 @@ public class NodeManagerMetrics {
  @Metric("Container launch duration")
   MutableRate containerLaunchDuration;
 
+  private long allocatedMB;
+  private long availableMB;
+
   public static NodeManagerMetrics create() {
 return create(DefaultMetricsSystem.instance());
   }
@@ -92,22 +95,27 @@ public class NodeManagerMetrics {
 
   public void allocateContainer(Resource res) {
 allocatedContainers.incr();
-allocatedGB.incr(res.getMemory() / 1024);
-availableGB.decr(res.getMemory() / 1024);
+allocatedMB = allocatedMB + res.getMemory();
+allocatedGB.set((int)Math.ceil(allocatedMB/1024d));
+availableMB = availableMB - res.getMemory();
+availableGB.set((int)Math.floor(availableMB/1024d));
 allocatedVCores.incr(res.getVirtualCores());
 availableVCores.decr(res.getVirtualCores());
   }
 
   public void releaseContainer(Resource res) {
 allocatedContainers.decr();
-allocatedGB.decr(res.getMemory() / 1024);
-availableGB.incr(res.getMemory() / 1024);
+allocatedMB = allocatedMB - res.getMemory();
+allocatedGB.set((int)Math.ceil(allocatedMB/1024d));
+availableMB = availableMB + res.getMemory();
+availableGB.set((int)Math.floor(availableMB/1024d));
 allocatedVCores.decr(res.getVirtualCores());
 availableVCores.incr(res.getVirtualCores());
   }
 
   public void addResource(Resource res) {
-availableGB.incr(res.getMemory() / 1024);
+availableMB = availableMB + res.getMemory();
+availableGB.incr((int)Math.floor(availableMB/1024d));
 availableVCores.incr(res.getVirtualCores());
   }
 
@@ -118,4 +126,5 @@ public class NodeManagerMetrics {
   public int getRunningContainers() {
 return containersRunning.value();
   }
+
 }

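For readers skimming the YARN-1156 hunk above: the patch switches from incrementing whole-GB deltas to tracking MB internally and rounding on the way out, so sub-GB containers are no longer lost to integer division. A standalone sketch of that rounding rule (hypothetical class name, not the actual NodeManagerMetrics code):

```java
// Minimal illustration of the YARN-1156 rounding behavior:
// allocated GB rounds up, available GB rounds down.
public class GbRounding {
    // Allocated memory is rounded up so a 512 MB container counts as 1 GB.
    public static int allocatedGb(long allocatedMb) {
        return (int) Math.ceil(allocatedMb / 1024d);
    }

    // Available memory is rounded down so partial GBs are not overstated.
    public static int availableGb(long availableMb) {
        return (int) Math.floor(availableMb / 1024d);
    }

    public static void main(String[] args) {
        System.out.println(allocatedGb(512));   // 1
        System.out.println(availableGb(7680));  // 7
    }
}
```

Before the patch, allocating a 512 MB container changed neither gauge (512 / 1024 == 0 in integer math); with the MB counters and ceil/floor, both gauges move.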
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e65b7c5f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/metrics/TestNodeManagerMetrics.java
--
diff --git 

[25/29] hadoop git commit: HDFS-7459. Consolidate cache-related implementation in FSNamesystem into a single class. Contributed by Haohui Mai.

2014-12-08 Thread vinayakumarb
HDFS-7459. Consolidate cache-related implementation in FSNamesystem into a 
single class. Contributed by Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e227fb8f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e227fb8f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e227fb8f

Branch: refs/heads/HDFS-EC
Commit: e227fb8fbcd414717faded9454b8ef813f7aafea
Parents: 0707e4e
Author: Haohui Mai whe...@apache.org
Authored: Fri Dec 5 18:35:45 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Fri Dec 5 18:37:07 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hdfs/server/namenode/FSNDNCacheOp.java  | 124 
 .../hdfs/server/namenode/FSNamesystem.java  | 140 ++-
 3 files changed, 173 insertions(+), 94 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e227fb8f/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d4db732..87b02c4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -435,6 +435,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7474. Avoid resolving path in FSPermissionChecker. (jing9)
 
+HDFS-7459. Consolidate cache-related implementation in FSNamesystem into
+a single class. (wheat9)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e227fb8f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNDNCacheOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNDNCacheOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNDNCacheOp.java
new file mode 100644
index 000..093ee74
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNDNCacheOp.java
@@ -0,0 +1,124 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.fs.BatchedRemoteIterator.BatchedListEntries;
+import org.apache.hadoop.fs.CacheFlag;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
+import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
+import org.apache.hadoop.hdfs.protocol.CachePoolInfo;
+import org.apache.hadoop.security.AccessControlException;
+
+import java.io.IOException;
+import java.util.EnumSet;
+
+class FSNDNCacheOp {
+  static CacheDirectiveInfo addCacheDirective(
+  FSNamesystem fsn, CacheManager cacheManager,
+  CacheDirectiveInfo directive, EnumSet<CacheFlag> flags,
+  boolean logRetryCache)
+  throws IOException {
+
+final FSPermissionChecker pc = getFsPermissionChecker(fsn);
+
+if (directive.getId() != null) {
+  throw new IOException("addDirective: you cannot specify an ID " +
+  "for this operation.");
+}
+CacheDirectiveInfo effectiveDirective =
+cacheManager.addDirective(directive, pc, flags);
+fsn.getEditLog().logAddCacheDirectiveInfo(effectiveDirective,
+logRetryCache);
+return effectiveDirective;
+  }
+
+  static void modifyCacheDirective(
+  FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo 
directive,
+  EnumSet<CacheFlag> flags, boolean logRetryCache) throws IOException {
+final FSPermissionChecker pc = getFsPermissionChecker(fsn);
+
+cacheManager.modifyDirective(directive, pc, flags);
+fsn.getEditLog().logModifyCacheDirectiveInfo(directive, logRetryCache);
+  }
+
+  static void removeCacheDirective(
+  FSNamesystem fsn, CacheManager cacheManager, long id,
+  boolean logRetryCache)
+  throws IOException {
+

[19/29] hadoop git commit: HADOOP-11355. When accessing data in HDFS and the key has been deleted, a Null Pointer Exception is shown. Contributed by Arun Suresh.

2014-12-08 Thread vinayakumarb
HADOOP-11355. When accessing data in HDFS and the key has been deleted, a Null 
Pointer Exception is shown. Contributed by Arun Suresh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9cdaec6a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9cdaec6a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9cdaec6a

Branch: refs/heads/HDFS-EC
Commit: 9cdaec6a6f6cb1680ad6e44d7b0c8d70cdcbe3fa
Parents: f6452eb
Author: Andrew Wang w...@apache.org
Authored: Fri Dec 5 12:01:23 2014 -0800
Committer: Andrew Wang w...@apache.org
Committed: Fri Dec 5 12:01:23 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt  | 3 +++
 .../crypto/key/kms/server/KeyAuthorizationKeyProvider.java   | 4 
 .../org/apache/hadoop/crypto/key/kms/server/TestKMS.java | 8 
 3 files changed, 15 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9cdaec6a/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 2f88fc8..7a6a938 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -505,6 +505,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11348. Remove unused variable from CMake error message for finding
 openssl (Dian Fu via Colin P. McCabe)
 
+HADOOP-11355. When accessing data in HDFS and the key has been deleted,
+a Null Pointer Exception is shown. (Arun Suresh via wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9cdaec6a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java
index 4ce9611..074f1fb 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java
@@ -240,6 +240,10 @@ public class KeyAuthorizationKeyProvider extends 
KeyProviderCryptoExtension {
 String kn = ekv.getEncryptionKeyName();
 String kvn = ekv.getEncryptionKeyVersionName();
 KeyVersion kv = provider.getKeyVersion(kvn);
+if (kv == null) {
+  throw new IllegalArgumentException(String.format(
+  "'%s' not found", kvn));
+}
 if (!kv.getName().equals(kn)) {
   throw new IllegalArgumentException(String.format(
   "KeyVersion '%s' does not belong to the key '%s'", kvn, kn));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9cdaec6a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
 
b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
index b9409ca..61ce807 100644
--- 
a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
+++ 
b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
@@ -498,6 +498,14 @@ public class TestKMS {
 // deleteKey()
 kp.deleteKey("k1");
 
+// Check decryption after Key deletion
+try {
+  kpExt.decryptEncryptedKey(ek1);
+  Assert.fail("Should not be allowed !!");
+} catch (Exception e) {
+  Assert.assertTrue(e.getMessage().contains("'k1@1' not found"));
+}
+
 // getKey()
 Assert.assertNull(kp.getKeyVersion("k1"));
 


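The HADOOP-11355 hunks above replace a latent NullPointerException with a fail-fast `IllegalArgumentException` naming the missing key version. A self-contained sketch of that guard pattern (hypothetical class and map-backed store, not the KMS provider itself):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the fail-fast null guard: a missing key-version lookup becomes
// a descriptive IllegalArgumentException instead of an NPE further down.
public class KeyLookupGuard {
    private final Map<String, String> versions = new HashMap<>();

    String requireKeyVersion(String kvn) {
        String kv = versions.get(kvn);
        if (kv == null) {
            // Same message shape the patch asserts on: "'<name>' not found"
            throw new IllegalArgumentException(String.format("'%s' not found", kvn));
        }
        return kv;
    }

    public static void main(String[] args) {
        KeyLookupGuard g = new KeyLookupGuard();
        try {
            g.requireKeyVersion("k1@1");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // 'k1@1' not found
        }
    }
}
```

This mirrors why the TestKMS change decrypts after `deleteKey`: the caller should see which version disappeared, not a bare NPE.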

[22/29] hadoop git commit: HDFS-7474. Avoid resolving path in FSPermissionChecker. Contributed by Jing Zhao.

2014-12-08 Thread vinayakumarb
HDFS-7474. Avoid resolving path in FSPermissionChecker. Contributed by Jing 
Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/475c6b49
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/475c6b49
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/475c6b49

Branch: refs/heads/HDFS-EC
Commit: 475c6b4978045d55d1ebcea69cc9a2f24355aca2
Parents: 4b13082
Author: Jing Zhao ji...@apache.org
Authored: Fri Dec 5 14:17:17 2014 -0800
Committer: Jing Zhao ji...@apache.org
Committed: Fri Dec 5 14:17:17 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   2 +
 .../server/namenode/EncryptionZoneManager.java  |   4 +-
 .../hdfs/server/namenode/FSDirConcatOp.java |   9 +-
 .../hdfs/server/namenode/FSDirMkdirOp.java  |  17 +-
 .../hdfs/server/namenode/FSDirRenameOp.java |  20 +-
 .../hdfs/server/namenode/FSDirSnapshotOp.java   |  48 ++--
 .../server/namenode/FSDirStatAndListingOp.java  |  35 +--
 .../hdfs/server/namenode/FSDirectory.java   |  99 +++
 .../hdfs/server/namenode/FSEditLogLoader.java   |  16 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 270 ---
 .../server/namenode/FSPermissionChecker.java|  22 +-
 .../hdfs/server/namenode/INodesInPath.java  |  21 +-
 .../namenode/snapshot/SnapshotManager.java  |  50 ++--
 .../namenode/TestFSPermissionChecker.java   |  10 +-
 .../server/namenode/TestSnapshotPathINodes.java |  20 +-
 .../namenode/snapshot/TestSnapshotManager.java  |  14 +-
 16 files changed, 295 insertions(+), 362 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/475c6b49/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 22f462f..d4db732 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -433,6 +433,8 @@ Release 2.7.0 - UNRELEASED
 HDFS-7478. Move org.apache.hadoop.hdfs.server.namenode.NNConf to
 FSNamesystem. (Li Lu via wheat9)
 
+HDFS-7474. Avoid resolving path in FSPermissionChecker. (jing9)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/475c6b49/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
index 0d7ced9..135979f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
@@ -285,12 +285,12 @@ public class EncryptionZoneManager {
   CryptoProtocolVersion version, String keyName)
   throws IOException {
 assert dir.hasWriteLock();
-if (dir.isNonEmptyDirectory(src)) {
+final INodesInPath srcIIP = dir.getINodesInPath4Write(src, false);
+if (dir.isNonEmptyDirectory(srcIIP)) {
   throw new IOException(
   "Attempt to create an encryption zone for a non-empty directory.");
 }
 
-final INodesInPath srcIIP = dir.getINodesInPath4Write(src, false);
 if (srcIIP != null &&
 srcIIP.getLastINode() != null &&
 !srcIIP.getLastINode().isDirectory()) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/475c6b49/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
index 12feb33..c2e0f08 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
@@ -53,15 +53,17 @@ class FSDirConcatOp {
   }
 }
 
+final INodesInPath trgIip = fsd.getINodesInPath4Write(target);
 // write permission for the target
 if (fsd.isPermissionEnabled()) {
   FSPermissionChecker pc = fsd.getPermissionChecker();
-  fsd.checkPathAccess(pc, target, FsAction.WRITE);
+  fsd.checkPathAccess(pc, trgIip, FsAction.WRITE);
 
   // and srcs
   for(String aSrc: 

hadoop git commit: MAPREDUCE-6177. Minor typo in the EncryptedShuffle document about ssl-client.xml. Contributed by Yangping Wu. (harsh)

2014-12-08 Thread harsh
Repository: hadoop
Updated Branches:
  refs/heads/trunk 120e1decd -> 8963515b8


MAPREDUCE-6177. Minor typo in the EncryptedShuffle document about 
ssl-client.xml. Contributed by Yangping Wu. (harsh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8963515b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8963515b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8963515b

Branch: refs/heads/trunk
Commit: 8963515b880b78068791f11abe4f5df332553be1
Parents: 120e1de
Author: Harsh J ha...@cloudera.com
Authored: Mon Dec 8 15:57:52 2014 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Mon Dec 8 15:57:52 2014 +0530

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../src/site/apt/EncryptedShuffle.apt.vm  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8963515b/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 3f34acd..c757d40 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -246,6 +246,9 @@ Release 2.7.0 - UNRELEASED
 
   BUG FIXES
 
+MAPREDUCE-6177. Minor typo in the EncryptedShuffle document about
+ssl-client.xml (Yangping Wu via harsh)
+
 MAPREDUCE-5918. LineRecordReader can return the same decompressor to
 CodecPool multiple times (Sergey Murylev via raviprak)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8963515b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
index da412df..68e569d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
@@ -202,7 +202,7 @@ Hadoop MapReduce Next Generation - Encrypted Shuffle
 
 ** ssl-client.xml (Reducer/Fetcher) Configuration:
 
-  The mapred user should own the ssl-server.xml file and it should have
+  The mapred user should own the ssl-client.xml file and it should have
   default permissions.
 
 
*-+-+-+



hadoop git commit: MAPREDUCE-6177. Minor typo in the EncryptedShuffle document about ssl-client.xml. Contributed by Yangping Wu. (harsh)

2014-12-08 Thread harsh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 d02cb9c51 -> bb1fedfbc


MAPREDUCE-6177. Minor typo in the EncryptedShuffle document about 
ssl-client.xml. Contributed by Yangping Wu. (harsh)

(cherry picked from commit 8963515b880b78068791f11abe4f5df332553be1)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bb1fedfb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bb1fedfb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bb1fedfb

Branch: refs/heads/branch-2
Commit: bb1fedfbc36411b1d3f63bcfac05028e1b6c2eb2
Parents: d02cb9c
Author: Harsh J ha...@cloudera.com
Authored: Mon Dec 8 15:57:52 2014 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Mon Dec 8 16:00:12 2014 +0530

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../src/site/apt/EncryptedShuffle.apt.vm  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb1fedfb/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index e0969e4..bccb616 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -19,6 +19,9 @@ Release 2.7.0 - UNRELEASED
 
   BUG FIXES
 
+MAPREDUCE-6177. Minor typo in the EncryptedShuffle document about
+ssl-client.xml (Yangping Wu via harsh)
+
 MAPREDUCE-5918. LineRecordReader can return the same decompressor to
 CodecPool multiple times (Sergey Murylev via raviprak)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb1fedfb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
index da412df..68e569d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
@@ -202,7 +202,7 @@ Hadoop MapReduce Next Generation - Encrypted Shuffle
 
 ** ssl-client.xml (Reducer/Fetcher) Configuration:
 
-  The mapred user should own the ssl-server.xml file and it should have
+  The mapred user should own the ssl-client.xml file and it should have
   default permissions.
 
 
*-+-+-+



[1/2] hadoop git commit: HADOOP-10530 Make hadoop build on Java7+ only (stevel)

2014-12-08 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 bb1fedfbc -> 275561d84
  refs/heads/trunk 8963515b8 -> 144da2e46


HADOOP-10530 Make hadoop build on Java7+ only (stevel)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/275561d8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/275561d8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/275561d8

Branch: refs/heads/branch-2
Commit: 275561d8488fda9a2735b29f5396d8b6140ffa19
Parents: bb1fedf
Author: Steve Loughran ste...@apache.org
Authored: Mon Dec 8 15:30:34 2014 +
Committer: Steve Loughran ste...@apache.org
Committed: Mon Dec 8 15:30:34 2014 +

--
 BUILDING.txt |  4 ++--
 hadoop-assemblies/pom.xml|  4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml | 17 -
 hadoop-common-project/hadoop-common/CHANGES.txt  |  2 ++
 hadoop-project/pom.xml   | 19 +++
 pom.xml  |  2 +-
 6 files changed, 22 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/275561d8/BUILDING.txt
--
diff --git a/BUILDING.txt b/BUILDING.txt
index 06bef1f..94cbe5e 100644
--- a/BUILDING.txt
+++ b/BUILDING.txt
@@ -4,7 +4,7 @@ Build instructions for Hadoop
 Requirements:
 
 * Unix System
-* JDK 1.6+
+* JDK 1.7+
 * Maven 3.0 or later
 * Findbugs 1.3.9 (if running findbugs)
 * ProtocolBuffer 2.5.0
@@ -204,7 +204,7 @@ Building on Windows
 Requirements:
 
 * Windows System
-* JDK 1.6+
+* JDK 1.7+
 * Maven 3.0 or later
 * Findbugs 1.3.9 (if running findbugs)
 * ProtocolBuffer 2.5.0

http://git-wip-us.apache.org/repos/asf/hadoop/blob/275561d8/hadoop-assemblies/pom.xml
--
diff --git a/hadoop-assemblies/pom.xml b/hadoop-assemblies/pom.xml
index da2d0b6..5f0e226 100644
--- a/hadoop-assemblies/pom.xml
+++ b/hadoop-assemblies/pom.xml
@@ -45,10 +45,10 @@
 <configuration>
   <rules>
     <requireMavenVersion>
-      <version>[3.0.0,)</version>
+      <version>${enforced.maven.version}</version>
     </requireMavenVersion>
     <requireJavaVersion>
-      <version>1.6</version>
+      <version>${enforced.java.version}</version>
     </requireJavaVersion>
   </rules>
 </configuration>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/275561d8/hadoop-common-project/hadoop-annotations/pom.xml
--
diff --git a/hadoop-common-project/hadoop-annotations/pom.xml 
b/hadoop-common-project/hadoop-annotations/pom.xml
index 2d6255a..1221001 100644
--- a/hadoop-common-project/hadoop-annotations/pom.xml
+++ b/hadoop-common-project/hadoop-annotations/pom.xml
@@ -40,23 +40,6 @@
 
   <profiles>
     <profile>
-      <id>os.linux</id>
-      <activation>
-        <os>
-          <family>!Mac</family>
-        </os>
-      </activation>
-      <dependencies>
-        <dependency>
-          <groupId>jdk.tools</groupId>
-          <artifactId>jdk.tools</artifactId>
-          <version>1.6</version>
-          <scope>system</scope>
-          <systemPath>${java.home}/../lib/tools.jar</systemPath>
-        </dependency>
-      </dependencies>
-    </profile>
-    <profile>
       <id>jdk1.7</id>
       <activation>
         <jdk>1.7</jdk>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/275561d8/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 6db9eec..617dfbb 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -4,6 +4,8 @@ Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES
 
+HADOOP-10530 Make hadoop build on Java7+ only (stevel)
+
   NEW FEATURES
 
 HADOOP-10987. Provide an iterator-based listing API for FileSystem (kihwal)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/275561d8/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index cd90448..76b8645 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -73,6 +73,17 @@
 <zookeeper.version>3.4.6</zookeeper.version>

 <tomcat.version>6.0.41</tomcat.version>
+
+<!-- define the Java language version used by the compiler -->
+<javac.version>1.7</javac.version>
+
+<!-- The java version enforced by the maven enforcer -->
+<!-- more complex patterns can be used here, such as
+   [${javac.version})
+for an open-ended enforcement
+-->
+

[2/2] hadoop git commit: HADOOP-10530 Make hadoop build on Java7+ only (stevel)

2014-12-08 Thread stevel
HADOOP-10530 Make hadoop build on Java7+ only (stevel)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/144da2e4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/144da2e4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/144da2e4

Branch: refs/heads/trunk
Commit: 144da2e4656703751c48875b4ed34975d106edaa
Parents: 8963515
Author: Steve Loughran ste...@apache.org
Authored: Mon Dec 8 15:30:34 2014 +
Committer: Steve Loughran ste...@apache.org
Committed: Mon Dec 8 15:31:00 2014 +

--
 BUILDING.txt |  4 ++--
 hadoop-assemblies/pom.xml|  4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml | 17 -
 hadoop-common-project/hadoop-common/CHANGES.txt  |  2 ++
 hadoop-project/pom.xml   | 19 +++
 pom.xml  |  2 +-
 6 files changed, 22 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/144da2e4/BUILDING.txt
--
diff --git a/BUILDING.txt b/BUILDING.txt
index 06bef1f..94cbe5e 100644
--- a/BUILDING.txt
+++ b/BUILDING.txt
@@ -4,7 +4,7 @@ Build instructions for Hadoop
 Requirements:
 
 * Unix System
-* JDK 1.6+
+* JDK 1.7+
 * Maven 3.0 or later
 * Findbugs 1.3.9 (if running findbugs)
 * ProtocolBuffer 2.5.0
@@ -204,7 +204,7 @@ Building on Windows
 Requirements:
 
 * Windows System
-* JDK 1.6+
+* JDK 1.7+
 * Maven 3.0 or later
 * Findbugs 1.3.9 (if running findbugs)
 * ProtocolBuffer 2.5.0

http://git-wip-us.apache.org/repos/asf/hadoop/blob/144da2e4/hadoop-assemblies/pom.xml
--
diff --git a/hadoop-assemblies/pom.xml b/hadoop-assemblies/pom.xml
index 66b6bdb..b53bacc 100644
--- a/hadoop-assemblies/pom.xml
+++ b/hadoop-assemblies/pom.xml
@@ -45,10 +45,10 @@
 <configuration>
   <rules>
     <requireMavenVersion>
-      <version>[3.0.0,)</version>
+      <version>${enforced.maven.version}</version>
     </requireMavenVersion>
     <requireJavaVersion>
-      <version>1.6</version>
+      <version>${enforced.java.version}</version>
     </requireJavaVersion>
   </rules>
 </configuration>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/144da2e4/hadoop-common-project/hadoop-annotations/pom.xml
--
diff --git a/hadoop-common-project/hadoop-annotations/pom.xml 
b/hadoop-common-project/hadoop-annotations/pom.xml
index 84a106e..c011b45 100644
--- a/hadoop-common-project/hadoop-annotations/pom.xml
+++ b/hadoop-common-project/hadoop-annotations/pom.xml
@@ -40,23 +40,6 @@
 
   <profiles>
     <profile>
-      <id>os.linux</id>
-      <activation>
-        <os>
-          <family>!Mac</family>
-        </os>
-      </activation>
-      <dependencies>
-        <dependency>
-          <groupId>jdk.tools</groupId>
-          <artifactId>jdk.tools</artifactId>
-          <version>1.6</version>
-          <scope>system</scope>
-          <systemPath>${java.home}/../lib/tools.jar</systemPath>
-        </dependency>
-      </dependencies>
-    </profile>
-    <profile>
       <id>jdk1.7</id>
       <activation>
         <jdk>1.7</jdk>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/144da2e4/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a626388..616842f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -368,6 +368,8 @@ Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES
 
+HADOOP-10530 Make hadoop build on Java7+ only (stevel)
+
   NEW FEATURES
 
 HADOOP-10987. Provide an iterator-based listing API for FileSystem (kihwal)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/144da2e4/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index d3c404e..3b52dc3 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -73,6 +73,17 @@
 <zookeeper.version>3.4.6</zookeeper.version>

 <tomcat.version>6.0.41</tomcat.version>
+
+<!-- define the Java language version used by the compiler -->
+<javac.version>1.7</javac.version>
+
+<!-- The java version enforced by the maven enforcer -->
+<!-- more complex patterns can be used here, such as
+   [${javac.version})
+for an open-ended enforcement
+-->
+<enforced.java.version>[${javac.version},)</enforced.java.version>
+<enforced.maven.version>[3.0.2,)</enforced.maven.version>
   </properties>
 
   

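The HADOOP-10530 pom changes express the Java floor as the open-ended range `[${javac.version},)` — a minimum version with no upper bound, evaluated by the maven-enforcer plugin. A standalone sketch of what such an inclusive lower-bound check does (hypothetical class, not the enforcer's implementation):

```java
// Illustration of an open-ended version range like "[1.7,)": the candidate
// version satisfies the range iff it is >= the inclusive floor.
public class VersionFloor {
    static boolean satisfies(String version, String floor) {
        String[] v = version.split("\\.");
        String[] f = floor.split("\\.");
        // Compare segment by segment, treating missing segments as 0.
        for (int i = 0; i < Math.max(v.length, f.length); i++) {
            int a = i < v.length ? Integer.parseInt(v[i]) : 0;
            int b = i < f.length ? Integer.parseInt(f[i]) : 0;
            if (a != b) {
                return a > b;
            }
        }
        return true; // equal versions satisfy an inclusive lower bound
    }

    public static void main(String[] args) {
        System.out.println(satisfies("1.8", "1.7")); // true
        System.out.println(satisfies("1.6", "1.7")); // false: build rejected
    }
}
```

Deriving the range from `${javac.version}` keeps the compiler level and the enforcer floor in one place, so bumping the JDK requirement later is a one-property change.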
hadoop git commit: HDFS-7384. getfacl command and getAclStatus output should be in sync. Contributed by Vinayakumar B.

2014-12-08 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/trunk 144da2e46 - ffe942b82


HDFS-7384. getfacl command and getAclStatus output should be in sync. 
Contributed by Vinayakumar B.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ffe942b8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ffe942b8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ffe942b8

Branch: refs/heads/trunk
Commit: ffe942b82c1208bc7b22899da3a233944cb5ab52
Parents: 144da2e
Author: cnauroth cnaur...@apache.org
Authored: Mon Dec 8 10:23:09 2014 -0800
Committer: cnauroth cnaur...@apache.org
Committed: Mon Dec 8 10:23:09 2014 -0800

--
 .../apache/hadoop/fs/permission/AclEntry.java   |  4 +-
 .../apache/hadoop/fs/permission/AclStatus.java  | 79 +++-
 .../org/apache/hadoop/fs/shell/AclCommands.java | 32 
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java | 19 +++--
 .../hadoop/hdfs/server/namenode/FSDirAclOp.java |  4 +-
 .../tools/offlineImageViewer/FSImageLoader.java | 31 +++-
 .../org/apache/hadoop/hdfs/web/JsonUtil.java| 17 -
 .../hadoop-hdfs/src/main/proto/acl.proto|  1 +
 .../hadoop-hdfs/src/site/apt/WebHDFS.apt.vm |  1 +
 .../hdfs/server/namenode/FSAclBaseTest.java | 46 
 .../src/test/resources/testAclCLI.xml   | 53 +
 12 files changed, 246 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ffe942b8/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
index b65b7a0..b9def64 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
@@ -146,7 +146,9 @@ public class AclEntry {
  * @return Builder this builder, for call chaining
  */
 public Builder setName(String name) {
-  this.name = name;
+  if (name != null && !name.isEmpty()) {
+this.name = name;
+  }
   return this;
 }
 

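The `AclEntry.Builder.setName` change above silently ignores null or empty names, so unnamed ACL entries come out the same whether the caller omits the name or passes an empty string — which is what keeps getfacl and getAclStatus in sync. A self-contained sketch of that guarded-setter pattern (hypothetical builder, not the Hadoop class):

```java
// Sketch of a builder setter that ignores null/empty input, so repeated
// calls with "no name" cannot clobber a previously set meaningful value.
public class AclNameBuilder {
    private String name;

    public AclNameBuilder setName(String name) {
        if (name != null && !name.isEmpty()) {
            this.name = name; // only a non-empty name is kept
        }
        return this;
    }

    public String getName() {
        return name;
    }

    public static void main(String[] args) {
        AclNameBuilder b = new AclNameBuilder();
        b.setName("").setName(null).setName("alice");
        System.out.println(b.getName()); // alice
    }
}
```

Callers that deserialize wire formats where "no name" arrives as `""` (as in the protobuf path touched by this patch) then need no special-casing before calling the setter.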
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ffe942b8/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
index 4a7258f..9d7500a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
@@ -23,6 +23,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 
 import com.google.common.base.Objects;
+import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
 
 /**
@@ -36,6 +37,7 @@ public class AclStatus {
   private final String group;
   private final boolean stickyBit;
  private final List<AclEntry> entries;
+  private final FsPermission permission;
 
   /**
* Returns the file owner.
@@ -73,6 +75,14 @@ public class AclStatus {
 return entries;
   }
 
+  /**
+   * Returns the permission set for the path
+   * @return {@link FsPermission} for the path
+   */
+  public FsPermission getPermission() {
+return permission;
+  }
+
   @Override
   public boolean equals(Object o) {
 if (o == null) {
@@ -113,6 +123,7 @@ public class AclStatus {
 private String group;
 private boolean stickyBit;
private List<AclEntry> entries = Lists.newArrayList();
+private FsPermission permission = null;
 
 /**
  * Sets the file owner.
@@ -173,12 +184,21 @@ public class AclStatus {
 }
 
 /**
+ * Sets the permission for the file.
+ * @param permission
+ */
+public Builder setPermission(FsPermission permission) {
+  this.permission = permission;
+  return this;
+}
+
+/**
  * Builds a new AclStatus populated with the set properties.
  *
  * @return AclStatus new AclStatus
  */
 public AclStatus build() {
-  return new AclStatus(owner, group, stickyBit, entries);
+  return new AclStatus(owner, group, stickyBit, entries, permission);
 }
   }
 
@@ -190,12 +210,67 @@ public class AclStatus {
* @param group 

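The diff above threads a new optional `permission` field through the `AclStatus` builder and constructor, defaulting to null when never set. A simplified, self-contained sketch of that shape (Strings stand in for `FsPermission` and `AclEntry`; this is not the real Hadoop class):

```java
// Simplified sketch of the AclStatus change: an immutable value object whose
// builder gains an optional permission field. Strings stand in for
// FsPermission and AclEntry; names here are illustrative only.
import java.util.ArrayList;
import java.util.List;

public class AclStatusSketch {
    private final String owner;
    private final boolean stickyBit;
    private final List<String> entries;
    private final String permission; // new optional field

    private AclStatusSketch(String owner, boolean stickyBit,
                            List<String> entries, String permission) {
        this.owner = owner;
        this.stickyBit = stickyBit;
        this.entries = entries;
        this.permission = permission;
    }

    public String getPermission() { return permission; }

    public static class Builder {
        private String owner;
        private boolean stickyBit;
        private final List<String> entries = new ArrayList<>();
        private String permission = null; // null means "never set"

        public Builder owner(String o) { this.owner = o; return this; }
        public Builder stickyBit(boolean b) { this.stickyBit = b; return this; }
        public Builder addEntry(String e) { this.entries.add(e); return this; }
        public Builder permission(String p) { this.permission = p; return this; }

        public AclStatusSketch build() {
            // the constructor now takes the extra field, as in the patch
            return new AclStatusSketch(owner, stickyBit, entries, permission);
        }
    }

    public static void main(String[] args) {
        AclStatusSketch s = new AclStatusSketch.Builder()
            .owner("hdfs").addEntry("user:alice:rwx")
            .permission("rwxr-xr-x").build();
        System.out.println(s.getPermission()); // rwxr-xr-x
    }
}
```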
hadoop git commit: HDFS-7384. getfacl command and getAclStatus output should be in sync. Contributed by Vinayakumar B.

2014-12-08 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 275561d84 -> 143a5b67d


HDFS-7384. getfacl command and getAclStatus output should be in sync. 
Contributed by Vinayakumar B.

(cherry picked from commit ffe942b82c1208bc7b22899da3a233944cb5ab52)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/143a5b67
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/143a5b67
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/143a5b67

Branch: refs/heads/branch-2
Commit: 143a5b67d87be0af62fedb46553a5d9835a90cb6
Parents: 275561d
Author: cnauroth cnaur...@apache.org
Authored: Mon Dec 8 10:23:09 2014 -0800
Committer: cnauroth cnaur...@apache.org
Committed: Mon Dec 8 10:28:25 2014 -0800

--
 .../apache/hadoop/fs/permission/AclEntry.java   |  4 +-
 .../apache/hadoop/fs/permission/AclStatus.java  | 79 +++-
 .../org/apache/hadoop/fs/shell/AclCommands.java | 32 
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java | 19 +++--
 .../hadoop/hdfs/server/namenode/FSDirAclOp.java |  4 +-
 .../tools/offlineImageViewer/FSImageLoader.java | 31 +++-
 .../org/apache/hadoop/hdfs/web/JsonUtil.java| 17 -
 .../hadoop-hdfs/src/main/proto/acl.proto|  1 +
 .../hadoop-hdfs/src/site/apt/WebHDFS.apt.vm |  1 +
 .../hdfs/server/namenode/FSAclBaseTest.java | 46 
 .../src/test/resources/testAclCLI.xml   | 53 +
 12 files changed, 246 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/143a5b67/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
index b65b7a0..b9def64 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
@@ -146,7 +146,9 @@ public class AclEntry {
  * @return Builder this builder, for call chaining
  */
 public Builder setName(String name) {
-  this.name = name;
+  if (name != null && !name.isEmpty()) {
+this.name = name;
+  }
   return this;
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/143a5b67/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
index 4a7258f..9d7500a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
@@ -23,6 +23,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 
 import com.google.common.base.Objects;
+import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
 
 /**
@@ -36,6 +37,7 @@ public class AclStatus {
   private final String group;
   private final boolean stickyBit;
  private final List<AclEntry> entries;
+  private final FsPermission permission;
 
   /**
* Returns the file owner.
@@ -73,6 +75,14 @@ public class AclStatus {
 return entries;
   }
 
+  /**
+   * Returns the permission set for the path
+   * @return {@link FsPermission} for the path
+   */
+  public FsPermission getPermission() {
+return permission;
+  }
+
   @Override
   public boolean equals(Object o) {
 if (o == null) {
@@ -113,6 +123,7 @@ public class AclStatus {
 private String group;
 private boolean stickyBit;
private List<AclEntry> entries = Lists.newArrayList();
+private FsPermission permission = null;
 
 /**
  * Sets the file owner.
@@ -173,12 +184,21 @@ public class AclStatus {
 }
 
 /**
+ * Sets the permission for the file.
+ * @param permission
+ */
+public Builder setPermission(FsPermission permission) {
+  this.permission = permission;
+  return this;
+}
+
+/**
  * Builds a new AclStatus populated with the set properties.
  *
  * @return AclStatus new AclStatus
  */
 public AclStatus build() {
-  return new AclStatus(owner, group, stickyBit, entries);
+  return new AclStatus(owner, group, stickyBit, entries, permission);
 }

hadoop git commit: HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid. Contributed by Akira AJISAKA.

2014-12-08 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/trunk ffe942b82 -> d555bb212


HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is 
invalid. Contributed by Akira AJISAKA.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d555bb21
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d555bb21
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d555bb21

Branch: refs/heads/trunk
Commit: d555bb2120cb44d094546e6b6560926561876c10
Parents: ffe942b
Author: cnauroth cnaur...@apache.org
Authored: Mon Dec 8 11:04:29 2014 -0800
Committer: cnauroth cnaur...@apache.org
Committed: Mon Dec 8 11:04:29 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java  | 2 +-
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml   | 3 ++-
 3 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d555bb21/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 7fcc8d2..fabb98f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -550,6 +550,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7472. Fix typo in message of ReplicaNotFoundException.
 (Masatake Iwasaki via wheat9)
 
+HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0
+is invalid. (Akira AJISAKA via cnauroth)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d555bb21/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 82741ce..aee79af 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -242,7 +242,7 @@ public class FSDirectory implements Closeable {
 Preconditions.checkArgument(
 maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS, "Cannot set "
 + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
-+ " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
++ " to a value less than 1 or greater than " + MAX_DIR_ITEMS);
 
 int threshold = conf.getInt(
 DFSConfigKeys.DFS_NAMENODE_NAME_CACHE_THRESHOLD_KEY,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d555bb21/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 06d7ba8..55a876e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -314,7 +314,8 @@
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>1048576</value>
  <description>Defines the maximum number of items that a directory may
-  contain.  A value of 0 will disable the check.</description>
+  contain. Cannot set the property to a value less than 1 or more than
+  640.</description>
 </property>
 
 <property>


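HDFS-7473 only corrects the error message: the precondition `maxDirItems > 0` already rejected 0, so the text should say "less than 1". A self-contained sketch of the same Guava-style range check (local `checkArgument` stand-in; the cap and key name here are illustrative, not HDFS's actual values):

```java
// Sketch of the Preconditions-style range validation shown in FSDirectory.
// MAX_DIR_ITEMS and the message text are illustrative stand-ins.
public class RangeCheck {
    static final int MAX_DIR_ITEMS = 1_000_000; // illustrative cap

    // Local stand-in for Guava's Preconditions.checkArgument.
    static void checkArgument(boolean ok, String message) {
        if (!ok) throw new IllegalArgumentException(message);
    }

    static int validateMaxDirItems(int maxDirItems) {
        // maxDirItems > 0 excludes 0 as well, which is why the corrected
        // message says "less than 1" rather than "less than 0".
        checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
            "Cannot set max-directory-items to a value less than 1 or greater than "
                + MAX_DIR_ITEMS);
        return maxDirItems;
    }

    public static void main(String[] args) {
        System.out.println(validateMaxDirItems(1024)); // 1024 is in range
        try {
            validateMaxDirItems(0); // throws: 0 is below the minimum
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```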

hadoop git commit: HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid. Contributed by Akira AJISAKA.

2014-12-08 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 143a5b67d -> 059c4a372


HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is 
invalid. Contributed by Akira AJISAKA.

(cherry picked from commit d555bb2120cb44d094546e6b6560926561876c10)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/059c4a37
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/059c4a37
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/059c4a37

Branch: refs/heads/branch-2
Commit: 059c4a372f74969c26dbd51e7a9ba1adc51f9d02
Parents: 143a5b6
Author: cnauroth cnaur...@apache.org
Authored: Mon Dec 8 11:04:29 2014 -0800
Committer: cnauroth cnaur...@apache.org
Committed: Mon Dec 8 11:07:18 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java  | 2 +-
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml   | 3 ++-
 3 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/059c4a37/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 5988b6e..324bbc2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -293,6 +293,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7472. Fix typo in message of ReplicaNotFoundException.
 (Masatake Iwasaki via wheat9)
 
+HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0
+is invalid. (Akira AJISAKA via cnauroth)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/059c4a37/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 82741ce..aee79af 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -242,7 +242,7 @@ public class FSDirectory implements Closeable {
 Preconditions.checkArgument(
 maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS, "Cannot set "
 + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
-+ " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
++ " to a value less than 1 or greater than " + MAX_DIR_ITEMS);
 
 int threshold = conf.getInt(
 DFSConfigKeys.DFS_NAMENODE_NAME_CACHE_THRESHOLD_KEY,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/059c4a37/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 4f489b4..83840db 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -314,7 +314,8 @@
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>1048576</value>
  <description>Defines the maximum number of items that a directory may
-  contain.  A value of 0 will disable the check.</description>
+  contain. Cannot set the property to a value less than 1 or more than
+  640.</description>
 </property>
 
 <property>



hadoop git commit: HADOOP-11354. ThrottledInputStream doesn't perform effective throttling. Contributed by Ted Yu.

2014-12-08 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/trunk d555bb212 -> 57cb43be5


HADOOP-11354. ThrottledInputStream doesn't perform effective throttling. 
Contributed by Ted Yu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/57cb43be
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/57cb43be
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/57cb43be

Branch: refs/heads/trunk
Commit: 57cb43be50c81daad8da34d33a45f396d9c1c35b
Parents: d555bb2
Author: Jing Zhao ji...@apache.org
Authored: Mon Dec 8 11:08:17 2014 -0800
Committer: Jing Zhao ji...@apache.org
Committed: Mon Dec 8 11:08:39 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../java/org/apache/hadoop/tools/util/ThrottledInputStream.java   | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/57cb43be/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 616842f..d496276 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -516,6 +516,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11343. Overflow is not properly handled in caclulating final iv for
 AES CTR. (Jerry Chen via wang)
 
+HADOOP-11354. ThrottledInputStream doesn't perform effective throttling.
+(Ted Yu via jing9)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/57cb43be/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
index f6fe118..d08a301 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
@@ -115,7 +115,7 @@ public class ThrottledInputStream extends InputStream {
   }
 
   private void throttle() throws IOException {
-if (getBytesPerSec() > maxBytesPerSec) {
+while (getBytesPerSec() > maxBytesPerSec) {
   try {
 Thread.sleep(SLEEP_DURATION_MS);
 totalSleepTime += SLEEP_DURATION_MS;



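The one-character `if` → `while` fix above matters because a single fixed-length sleep may leave the observed rate still well above the cap; the loop keeps sleeping until the rate drops to the limit. A sketch of that arithmetic under assumed names (`SLEEP_MS`, `rate`), with sleeping simulated by a counter so it runs instantly; this is not the actual `ThrottledInputStream` API:

```java
// Sketch of HADOOP-11354's while-loop throttling: keep waiting until the
// average rate (bytes per second over elapsed + waited time) falls to the
// cap. Thread.sleep is simulated by advancing a counter.
public class ThrottleSketch {
    static final long SLEEP_MS = 50; // stand-in for SLEEP_DURATION_MS

    static double rate(long bytes, long elapsedMs) {
        return elapsedMs <= 0 ? Double.MAX_VALUE : bytes * 1000.0 / elapsedMs;
    }

    // Returns total simulated sleep time in milliseconds.
    static long throttle(long totalBytes, long elapsedMs, long maxBytesPerSec) {
        long waited = 0;
        // With `if` instead of `while`, only one 50 ms sleep would happen,
        // no matter how far over the cap the rate currently is.
        while (rate(totalBytes, elapsedMs + waited) > maxBytesPerSec) {
            waited += SLEEP_MS; // simulated Thread.sleep(SLEEP_MS)
        }
        return waited;
    }

    public static void main(String[] args) {
        // 10 MB read in 1 s against a 1 MB/s cap: total elapsed time must
        // stretch to 10 s, i.e. 9000 ms of sleeping; one sleep is not enough.
        System.out.println(throttle(10_000_000, 1_000, 1_000_000)); // 9000
    }
}
```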
hadoop git commit: HADOOP-11354. ThrottledInputStream doesn't perform effective throttling. Contributed by Ted Yu.

2014-12-08 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 059c4a372 -> 582f96e41


HADOOP-11354. ThrottledInputStream doesn't perform effective throttling. 
Contributed by Ted Yu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/582f96e4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/582f96e4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/582f96e4

Branch: refs/heads/branch-2
Commit: 582f96e41dc3a739b5ca24ca117f7cbc9f3e7bc9
Parents: 059c4a3
Author: Jing Zhao ji...@apache.org
Authored: Mon Dec 8 11:08:17 2014 -0800
Committer: Jing Zhao ji...@apache.org
Committed: Mon Dec 8 11:09:34 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../java/org/apache/hadoop/tools/util/ThrottledInputStream.java   | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/582f96e4/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 617dfbb..33fa2f5 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -153,6 +153,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11343. Overflow is not properly handled in caclulating final iv for
 AES CTR. (Jerry Chen via wang)
 
+HADOOP-11354. ThrottledInputStream doesn't perform effective throttling.
+(Ted Yu via jing9)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/582f96e4/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
index f6fe118..d08a301 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
@@ -115,7 +115,7 @@ public class ThrottledInputStream extends InputStream {
   }
 
   private void throttle() throws IOException {
-if (getBytesPerSec() > maxBytesPerSec) {
+while (getBytesPerSec() > maxBytesPerSec) {
   try {
 Thread.sleep(SLEEP_DURATION_MS);
 totalSleepTime += SLEEP_DURATION_MS;



hadoop git commit: HDFS-7486. Consolidate XAttr-related implementation into a single class. Contributed by Haohui Mai.

2014-12-08 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/trunk 57cb43be5 -> 6c5bbd7a4


HDFS-7486. Consolidate XAttr-related implementation into a single class. 
Contributed by Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6c5bbd7a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6c5bbd7a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6c5bbd7a

Branch: refs/heads/trunk
Commit: 6c5bbd7a42d1e8b4416fd8870fd60c67867b35c9
Parents: 57cb43b
Author: Haohui Mai whe...@apache.org
Authored: Mon Dec 8 11:52:21 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Mon Dec 8 11:52:21 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../server/namenode/EncryptionZoneManager.java  |   3 +-
 .../hdfs/server/namenode/FSDirXAttrOp.java  | 460 +++
 .../hdfs/server/namenode/FSDirectory.java   | 295 ++--
 .../hdfs/server/namenode/FSEditLogLoader.java   |  19 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 227 +
 .../hdfs/server/namenode/TestFSDirectory.java   |  47 +-
 7 files changed, 554 insertions(+), 500 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c5bbd7a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index fabb98f..55026a2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -444,6 +444,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7384. 'getfacl' command and 'getAclStatus' output should be in sync.
 (Vinayakumar B via cnauroth)
 
+HDFS-7486. Consolidate XAttr-related implementation into a single class.
+(wheat9)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c5bbd7a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
index 135979f..faab1f0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
@@ -311,7 +311,8 @@ public class EncryptionZoneManager {
 xattrs.add(ezXAttr);
 // updating the xattr will call addEncryptionZone,
 // done this way to handle edit log loading
-dir.unprotectedSetXAttrs(src, xattrs, EnumSet.of(XAttrSetFlag.CREATE));
+FSDirXAttrOp.unprotectedSetXAttrs(dir, src, xattrs,
+  EnumSet.of(XAttrSetFlag.CREATE));
 return ezXAttr;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c5bbd7a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
new file mode 100644
index 000..303b9e3
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
@@ -0,0 +1,460 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import 

hadoop git commit: HDFS-7486. Consolidate XAttr-related implementation into a single class. Contributed by Haohui Mai.

2014-12-08 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 582f96e41 -> 7198232b8


HDFS-7486. Consolidate XAttr-related implementation into a single class. 
Contributed by Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7198232b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7198232b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7198232b

Branch: refs/heads/branch-2
Commit: 7198232b827880fa9d3df38013fc8a4dfa15e99b
Parents: 582f96e
Author: Haohui Mai whe...@apache.org
Authored: Mon Dec 8 11:52:21 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Mon Dec 8 11:55:40 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../server/namenode/EncryptionZoneManager.java  |   3 +-
 .../hdfs/server/namenode/FSDirXAttrOp.java  | 460 +++
 .../hdfs/server/namenode/FSDirectory.java   | 295 ++--
 .../hdfs/server/namenode/FSEditLogLoader.java   |  19 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 229 +
 .../hdfs/server/namenode/TestFSDirectory.java   |  47 +-
 7 files changed, 555 insertions(+), 501 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7198232b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 324bbc2..037dd8c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -187,6 +187,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7384. 'getfacl' command and 'getAclStatus' output should be in sync.
 (Vinayakumar B via cnauroth)
 
+HDFS-7486. Consolidate XAttr-related implementation into a single class.
+(wheat9)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7198232b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
index 135979f..faab1f0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
@@ -311,7 +311,8 @@ public class EncryptionZoneManager {
 xattrs.add(ezXAttr);
 // updating the xattr will call addEncryptionZone,
 // done this way to handle edit log loading
-dir.unprotectedSetXAttrs(src, xattrs, EnumSet.of(XAttrSetFlag.CREATE));
+FSDirXAttrOp.unprotectedSetXAttrs(dir, src, xattrs,
+  EnumSet.of(XAttrSetFlag.CREATE));
 return ezXAttr;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7198232b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
new file mode 100644
index 000..303b9e3
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
@@ -0,0 +1,460 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import 

hadoop git commit: HADOOP-11329. Add JAVA_LIBRARY_PATH to KMS startup options. Contributed by Arun Suresh.

2014-12-08 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 6c5bbd7a4 -> ddffcd8fa


HADOOP-11329. Add JAVA_LIBRARY_PATH to KMS startup options. Contributed by Arun 
Suresh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ddffcd8f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ddffcd8f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ddffcd8f

Branch: refs/heads/trunk
Commit: ddffcd8fac8af0ff78e63cca583af5c77a062891
Parents: 6c5bbd7
Author: Andrew Wang w...@apache.org
Authored: Mon Dec 8 13:44:44 2014 -0800
Committer: Andrew Wang w...@apache.org
Committed: Mon Dec 8 13:45:19 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt  |  2 ++
 .../hadoop-kms/src/main/conf/kms-env.sh  |  6 ++
 hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh| 11 ++-
 .../hadoop-kms/src/site/apt/index.apt.vm |  9 +
 4 files changed, 27 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ddffcd8f/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index d496276..d9219cc 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -519,6 +519,8 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11354. ThrottledInputStream doesn't perform effective throttling.
 (Ted Yu via jing9)
 
+HADOOP-11329. Add JAVA_LIBRARY_PATH to KMS startup options. (Arun Suresh 
via wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ddffcd8f/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh 
b/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
index 88a2b86..44dfe6a 100644
--- a/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
+++ b/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
@@ -47,3 +47,9 @@
 # The password of the SSL keystore if using SSL
 #
 # export KMS_SSL_KEYSTORE_PASS=password
+
+# The full path to any native libraries that need to be loaded
+# (For eg. location of natively compiled tomcat Apache portable
+# runtime (APR) libraries
+#
+# export JAVA_LIBRARY_PATH=${HOME}/lib/native

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ddffcd8f/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh 
b/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
index 24a1f54..f6ef6a5 100644
--- a/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
+++ b/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
@@ -31,7 +31,15 @@ BASEDIR=`cd ${BASEDIR}/..;pwd`
 
 KMS_SILENT=${KMS_SILENT:-true}
 
-source ${HADOOP_LIBEXEC_DIR:-${BASEDIR}/libexec}/kms-config.sh
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-${BASEDIR}/libexec}
+source ${HADOOP_LIBEXEC_DIR}/kms-config.sh
+
+
+if [ "x$JAVA_LIBRARY_PATH" = "x" ]; then
+  JAVA_LIBRARY_PATH="${HADOOP_LIBEXEC_DIR}/../lib/native/"
+else
+  JAVA_LIBRARY_PATH="${HADOOP_LIBEXEC_DIR}/../lib/native/:${JAVA_LIBRARY_PATH}"
+fi
 
 # The Java System property 'kms.http.port' it is not used by Kms,
 # it is used in Tomcat's server.xml configuration file
 catalina_opts="${catalina_opts} -Dkms.admin.port=${KMS_ADMIN_PORT}";
 catalina_opts="${catalina_opts} -Dkms.http.port=${KMS_HTTP_PORT}";
 catalina_opts="${catalina_opts} -Dkms.max.threads=${KMS_MAX_THREADS}";
 catalina_opts="${catalina_opts} -Dkms.ssl.keystore.file=${KMS_SSL_KEYSTORE_FILE}";
+catalina_opts="${catalina_opts} -Djava.library.path=${JAVA_LIBRARY_PATH}";
 
 print "Adding to CATALINA_OPTS:     ${catalina_opts}"
 print "Found KMS_SSL_KEYSTORE_PASS: `echo ${KMS_SSL_KEYSTORE_PASS} | sed 's/./*/g'`"
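For context on what the appended `-Djava.library.path=${JAVA_LIBRARY_PATH}` flag actually does: inside the KMS/Tomcat JVM it surfaces as an ordinary system property that the native-library loader consults. A minimal, hypothetical sketch (the path is made up; note that setting the property at runtime, as done here purely for illustration, does not re-drive the loader — the real script must pass it on the command line at startup):

```java
public class LibPathDemo {
    public static void main(String[] args) {
        // kms.sh passes -Djava.library.path=${JAVA_LIBRARY_PATH} to Tomcat;
        // simulate that startup flag here so the property read below works.
        System.setProperty("java.library.path", "/opt/kms/lib/native");

        // Inside the JVM the value is visible as a plain system property.
        String p = System.getProperty("java.library.path");
        System.out.println(p); // /opt/kms/lib/native
    }
}
```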

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ddffcd8f/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
--
diff --git a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm 
b/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
index 11b84d3..80d9a48 100644
--- a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
+++ b/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
@@ -159,6 +159,15 @@ hadoop-${project.version} $ sbin/kms.sh start
   NOTE: You need to restart the KMS for the configuration changes to take
   effect.
 
+** Loading native libraries
+
+  The following environment variable (which can be set in KMS's
+  etc/hadoop/kms-env.sh 

hadoop git commit: HADOOP-11329. Add JAVA_LIBRARY_PATH to KMS startup options. Contributed by Arun Suresh.

2014-12-08 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7198232b8 - 46a736516


HADOOP-11329. Add JAVA_LIBRARY_PATH to KMS startup options. Contributed by Arun 
Suresh.

(cherry picked from commit ddffcd8fac8af0ff78e63cca583af5c77a062891)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/46a73651
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/46a73651
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/46a73651

Branch: refs/heads/branch-2
Commit: 46a7365164bfa00485ee43fafba2d22b22f9c773
Parents: 7198232
Author: Andrew Wang w...@apache.org
Authored: Mon Dec 8 13:44:44 2014 -0800
Committer: Andrew Wang w...@apache.org
Committed: Mon Dec 8 13:45:34 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt  |  2 ++
 .../hadoop-kms/src/main/conf/kms-env.sh  |  6 ++
 hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh| 11 ++-
 .../hadoop-kms/src/site/apt/index.apt.vm |  9 +
 4 files changed, 27 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/46a73651/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 33fa2f5..e82e357 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -156,6 +156,8 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11354. ThrottledInputStream doesn't perform effective throttling.
 (Ted Yu via jing9)
 
+HADOOP-11329. Add JAVA_LIBRARY_PATH to KMS startup options. (Arun Suresh 
via wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/46a73651/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh 
b/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
index 88a2b86..44dfe6a 100644
--- a/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
+++ b/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
@@ -47,3 +47,9 @@
 # The password of the SSL keystore if using SSL
 #
 # export KMS_SSL_KEYSTORE_PASS=password
+
+# The full path to any native libraries that need to be loaded
+# (For eg. location of natively compiled tomcat Apache portable
+# runtime (APR) libraries
+#
+# export JAVA_LIBRARY_PATH=${HOME}/lib/native

http://git-wip-us.apache.org/repos/asf/hadoop/blob/46a73651/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh 
b/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
index 24a1f54..f6ef6a5 100644
--- a/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
+++ b/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
@@ -31,7 +31,15 @@ BASEDIR=`cd ${BASEDIR}/..;pwd`
 
 KMS_SILENT=${KMS_SILENT:-true}
 
-source ${HADOOP_LIBEXEC_DIR:-${BASEDIR}/libexec}/kms-config.sh
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-${BASEDIR}/libexec}
+source ${HADOOP_LIBEXEC_DIR}/kms-config.sh
+
+
+if [ "x${JAVA_LIBRARY_PATH}" = "x" ]; then
+  JAVA_LIBRARY_PATH="${HADOOP_LIBEXEC_DIR}/../lib/native/"
+else
+  JAVA_LIBRARY_PATH="${HADOOP_LIBEXEC_DIR}/../lib/native/:${JAVA_LIBRARY_PATH}"
+fi
 
 # The Java System property 'kms.http.port' it is not used by Kms,
 # it is used in Tomcat's server.xml configuration file
@@ -50,6 +58,7 @@ catalina_opts="${catalina_opts} -Dkms.admin.port=${KMS_ADMIN_PORT}";
 catalina_opts="${catalina_opts} -Dkms.http.port=${KMS_HTTP_PORT}";
 catalina_opts="${catalina_opts} -Dkms.max.threads=${KMS_MAX_THREADS}";
 catalina_opts="${catalina_opts} -Dkms.ssl.keystore.file=${KMS_SSL_KEYSTORE_FILE}";
+catalina_opts="${catalina_opts} -Djava.library.path=${JAVA_LIBRARY_PATH}";
 
 print "Adding to CATALINA_OPTS:     ${catalina_opts}"
 print "Found KMS_SSL_KEYSTORE_PASS: `echo ${KMS_SSL_KEYSTORE_PASS} | sed 's/./*/g'`"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/46a73651/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
--
diff --git a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm 
b/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
index 5d67b3b..88e3cff 100644
--- a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
+++ b/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
@@ -159,6 +159,15 @@ hadoop-${project.version} $ sbin/kms.sh start
   NOTE: You need to restart the KMS for the configuration changes to take
   effect.
 
+** Loading native libraries
+
+  The 

[08/41] hadoop git commit: HDFS-7448 TestBookKeeperHACheckpoints fails in trunk -move CHANGES.TXT entry

2014-12-08 Thread kasha
HDFS-7448 TestBookKeeperHACheckpoints fails in trunk -move CHANGES.TXT entry


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/22afae89
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/22afae89
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/22afae89

Branch: refs/heads/YARN-2139
Commit: 22afae890d7cf34a9be84590e7457774755b7a4a
Parents: e65b7c5
Author: Steve Loughran ste...@apache.org
Authored: Wed Dec 3 12:21:42 2014 +
Committer: Steve Loughran ste...@apache.org
Committed: Wed Dec 3 12:21:42 2014 +

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/22afae89/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 85d00b7..1679a71 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -257,9 +257,6 @@ Trunk (Unreleased)
 
 HDFS-7407. Minor typo in privileged pid/out/log names (aw)
 
-HDFS-7448 TestBookKeeperHACheckpoints fails in trunk build
-(Akira Ajisaka via stevel)
-
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -522,6 +519,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7444. convertToBlockUnderConstruction should preserve BlockCollection.
 (wheat9)
 
+HDFS-7448 TestBookKeeperHACheckpoints fails in trunk build
+(Akira Ajisaka via stevel)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES



[30/41] hadoop git commit: HADOOP-11343. Overflow is not properly handled in caclulating final iv for AES CTR. Contributed by Jerry Chen.

2014-12-08 Thread kasha
HADOOP-11343. Overflow is not properly handled in caclulating final iv for AES 
CTR. Contributed by Jerry Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0707e4ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0707e4ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0707e4ec

Branch: refs/heads/YARN-2139
Commit: 0707e4eca906552c960e3b8c4e20d9913145eca6
Parents: e69af83
Author: Andrew Wang w...@apache.org
Authored: Fri Dec 5 18:20:19 2014 -0800
Committer: Andrew Wang w...@apache.org
Committed: Fri Dec 5 18:20:19 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +
 .../apache/hadoop/crypto/AesCtrCryptoCodec.java | 27 -
 .../apache/hadoop/crypto/TestCryptoCodec.java   | 64 
 3 files changed, 79 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0707e4ec/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 7a6a938..965c6d3 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -508,6 +508,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11355. When accessing data in HDFS and the key has been deleted,
 a Null Pointer Exception is shown. (Arun Suresh via wang)
 
+HADOOP-11343. Overflow is not properly handled in caclulating final iv for
+AES CTR. (Jerry Chen via wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0707e4ec/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
index 8f8bc66..5e286b9 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
@@ -33,7 +33,6 @@ public abstract class AesCtrCryptoCodec extends CryptoCodec {
* @see http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
*/
   private static final int AES_BLOCK_SIZE = SUITE.getAlgorithmBlockSize();
-  private static final int CTR_OFFSET = 8;
 
   @Override
   public CipherSuite getCipherSuite() {
@@ -48,20 +47,18 @@ public abstract class AesCtrCryptoCodec extends CryptoCodec {
   public void calculateIV(byte[] initIV, long counter, byte[] IV) {
     Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
     Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);
-
-    System.arraycopy(initIV, 0, IV, 0, CTR_OFFSET);
-    long l = 0;
-    for (int i = 0; i < 8; i++) {
-      l = ((l << 8) | (initIV[CTR_OFFSET + i] & 0xff));
+
+    int i = IV.length; // IV length
+    int j = 0; // counter bytes index
+    int sum = 0;
+    while (i-- > 0) {
+      // (sum >>> Byte.SIZE) is the carry for addition
+      sum = (initIV[i] & 0xff) + (sum >>> Byte.SIZE);
+      if (j++ < 8) { // Big-endian, and long is 8 bytes length
+        sum += (byte) counter & 0xff;
+        counter >>>= 8;
+      }
+      IV[i] = (byte) sum;
     }
-    l += counter;
-    IV[CTR_OFFSET + 0] = (byte) (l >>> 56);
-    IV[CTR_OFFSET + 1] = (byte) (l >>> 48);
-    IV[CTR_OFFSET + 2] = (byte) (l >>> 40);
-    IV[CTR_OFFSET + 3] = (byte) (l >>> 32);
-    IV[CTR_OFFSET + 4] = (byte) (l >>> 24);
-    IV[CTR_OFFSET + 5] = (byte) (l >>> 16);
-    IV[CTR_OFFSET + 6] = (byte) (l >>> 8);
-    IV[CTR_OFFSET + 7] = (byte) (l);
   }
 }
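The patch above replaces 64-bit addition over only the low 8 IV bytes with byte-wise addition that carries across all 16 bytes, so an overflowing counter propagates into the high half of the IV instead of being truncated. A standalone sketch of that loop (the class name and test values are illustrative, not Hadoop code):

```java
import java.util.Arrays;

public class CalcIVDemo {
    // Byte-wise addition of a 64-bit counter into a 16-byte IV, with carry
    // propagation across all 16 bytes — mirroring the fixed calculateIV.
    static void calculateIV(byte[] initIV, long counter, byte[] iv) {
        int i = iv.length;  // walk from the last (least significant) byte
        int j = 0;          // number of counter bytes consumed so far
        int sum = 0;
        while (i-- > 0) {
            // high byte of the previous sum is the carry into this byte
            sum = (initIV[i] & 0xff) + (sum >>> Byte.SIZE);
            if (j++ < 8) {  // big-endian; a long contributes 8 bytes
                sum += (byte) counter & 0xff;
                counter >>>= 8;
            }
            iv[i] = (byte) sum;
        }
    }

    public static void main(String[] args) {
        byte[] init = new byte[16];
        Arrays.fill(init, (byte) 0xff); // maximal value: forces a full carry chain
        byte[] iv = new byte[16];
        calculateIV(init, 1L, iv);
        // 0xFF...FF + 1 wraps to all zeros across the full 128 bits
        for (byte b : iv) {
            if (b != 0) throw new AssertionError("carry did not propagate");
        }
        System.out.println("ok");
    }
}
```

With the old code, the same input would have overflowed the low 8 bytes silently and left the high 8 bytes untouched.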

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0707e4ec/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
index 79987ce..08231f9 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
@@ -23,7 +23,9 @@ import static org.junit.Assert.assertTrue;
 import java.io.BufferedInputStream;
 import java.io.DataInputStream;
 import java.io.IOException;
+import java.math.BigInteger;
 import java.security.GeneralSecurityException;
+import java.security.SecureRandom;
 import java.util.Arrays;
 

[06/41] hadoop git commit: HDFS-6735. A minor optimization to avoid pread() be blocked by read() inside the same DFSInputStream (Lars Hofhansl via stack)

2014-12-08 Thread kasha
HDFS-6735. A minor optimization to avoid pread() be blocked by read() inside 
the same DFSInputStream (Lars Hofhansl via stack)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7caa3bc9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7caa3bc9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7caa3bc9

Branch: refs/heads/YARN-2139
Commit: 7caa3bc98e6880f98c5c32c486a0c539f9fd3f5f
Parents: 92ce6ed
Author: stack st...@apache.org
Authored: Tue Dec 2 20:54:03 2014 -0800
Committer: stack st...@duboce.net
Committed: Tue Dec 2 20:57:38 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../dev-support/findbugsExcludeFile.xml |   9 +
 .../org/apache/hadoop/hdfs/DFSInputStream.java  | 320 +++
 .../hadoop/hdfs/protocol/LocatedBlocks.java |   9 +-
 .../hdfs/shortcircuit/ShortCircuitCache.java|  19 +-
 .../hdfs/shortcircuit/ShortCircuitReplica.java  |  12 +-
 6 files changed, 217 insertions(+), 155 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7caa3bc9/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 13dda88..85d00b7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -422,6 +422,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7446. HDFS inotify should have the ability to determine what txid it
 has read up to (cmccabe)
 
+HDFS-6735. A minor optimization to avoid pread() be blocked by read()
+inside the same DFSInputStream (Lars Hofhansl via stack)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7caa3bc9/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml 
b/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
index 2ddc4cc..dedeece 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
@@ -207,4 +207,13 @@
   Bug pattern=NP_LOAD_OF_KNOWN_NULL_VALUE /
 /Match
 
+!--
+ We use a separate lock to guard cachingStrategy in order to separate
+ locks for p-reads from seek + read invocations.
+--
+Match
+Class name=org.apache.hadoop.hdfs.DFSInputStream /
+Field name=cachingStrategy /
+Bug pattern=IS2_INCONSISTENT_SYNC /
+/Match
  /FindBugsFilter

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7caa3bc9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index 9794eec..b8b1d90 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -92,17 +92,32 @@ implements ByteBufferReadable, CanSetDropBehind, CanSetReadahead,
   private final DFSClient dfsClient;
   private boolean closed = false;
   private final String src;
-  private BlockReader blockReader = null;
   private final boolean verifyChecksum;
-  private LocatedBlocks locatedBlocks = null;
-  private long lastBlockBeingWrittenLength = 0;
-  private FileEncryptionInfo fileEncryptionInfo = null;
+
+  // state by stateful read only:
+  // (protected by lock on this)
+  /
   private DatanodeInfo currentNode = null;
   private LocatedBlock currentLocatedBlock = null;
   private long pos = 0;
   private long blockEnd = -1;
+  private BlockReader blockReader = null;
+  
+
+  // state shared by stateful and positional read:
+  // (protected by lock on infoLock)
+  
+  private LocatedBlocks locatedBlocks = null;
+  private long lastBlockBeingWrittenLength = 0;
+  private FileEncryptionInfo fileEncryptionInfo = null;
   private CachingStrategy cachingStrategy;
+  
+
   private final ReadStatistics readStatistics = new ReadStatistics();
+  // lock for state shared between read and pread
+  // Note: Never acquire a lock on <this> with this lock held to avoid
+  //       deadlocks (it's OK to acquire this lock when the lock on <this>
+  //       is held)
+  private final Object infoLock = new Object();
 
   /**
* Track the ByteBuffers that we have handed out to readers.
@@ -226,35 +241,38 @@ implements ByteBufferReadable, 
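The lock split this diff introduces can be illustrated with a toy sketch (not Hadoop code; field names are simplified): the monitor on `this` guards stateful-read position, a dedicated `infoLock` guards metadata shared with positional reads, and the ordering rule is that `infoLock` may be taken while holding `this` but never the reverse — so a pread is never blocked behind a stalled stateful read:

```java
public class TwoLockSketch {
    private long pos = 0;                      // guarded by 'this'
    private final Object infoLock = new Object();
    private long lastBlockLength = 0;          // guarded by infoLock

    // Stateful read path: lock order this -> infoLock (allowed).
    synchronized long statefulRead(long n) {
        pos += n;
        synchronized (infoLock) {
            return pos + lastBlockLength;
        }
    }

    // Positional read (pread) path: takes only infoLock, never 'this',
    // so it cannot be blocked by a stateful reader holding the monitor.
    long pread() {
        synchronized (infoLock) {
            return lastBlockLength;
        }
    }

    public static void main(String[] args) {
        TwoLockSketch s = new TwoLockSketch();
        System.out.println(s.statefulRead(4)); // 4
        System.out.println(s.pread());         // 0
    }
}
```

The one-way lock ordering is what makes the findbugs `IS2_INCONSISTENT_SYNC` exclusion above safe: `cachingStrategy` and friends are consistently guarded by `infoLock`, just not by the monitor findbugs expects.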

[28/41] hadoop git commit: HDFS-7474. Avoid resolving path in FSPermissionChecker. Contributed by Jing Zhao.

2014-12-08 Thread kasha
HDFS-7474. Avoid resolving path in FSPermissionChecker. Contributed by Jing 
Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/475c6b49
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/475c6b49
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/475c6b49

Branch: refs/heads/YARN-2139
Commit: 475c6b4978045d55d1ebcea69cc9a2f24355aca2
Parents: 4b13082
Author: Jing Zhao ji...@apache.org
Authored: Fri Dec 5 14:17:17 2014 -0800
Committer: Jing Zhao ji...@apache.org
Committed: Fri Dec 5 14:17:17 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   2 +
 .../server/namenode/EncryptionZoneManager.java  |   4 +-
 .../hdfs/server/namenode/FSDirConcatOp.java |   9 +-
 .../hdfs/server/namenode/FSDirMkdirOp.java  |  17 +-
 .../hdfs/server/namenode/FSDirRenameOp.java |  20 +-
 .../hdfs/server/namenode/FSDirSnapshotOp.java   |  48 ++--
 .../server/namenode/FSDirStatAndListingOp.java  |  35 +--
 .../hdfs/server/namenode/FSDirectory.java   |  99 +++
 .../hdfs/server/namenode/FSEditLogLoader.java   |  16 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 270 ---
 .../server/namenode/FSPermissionChecker.java|  22 +-
 .../hdfs/server/namenode/INodesInPath.java  |  21 +-
 .../namenode/snapshot/SnapshotManager.java  |  50 ++--
 .../namenode/TestFSPermissionChecker.java   |  10 +-
 .../server/namenode/TestSnapshotPathINodes.java |  20 +-
 .../namenode/snapshot/TestSnapshotManager.java  |  14 +-
 16 files changed, 295 insertions(+), 362 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/475c6b49/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 22f462f..d4db732 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -433,6 +433,8 @@ Release 2.7.0 - UNRELEASED
 HDFS-7478. Move org.apache.hadoop.hdfs.server.namenode.NNConf to
 FSNamesystem. (Li Lu via wheat9)
 
+HDFS-7474. Avoid resolving path in FSPermissionChecker. (jing9)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/475c6b49/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
index 0d7ced9..135979f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
@@ -285,12 +285,12 @@ public class EncryptionZoneManager {
   CryptoProtocolVersion version, String keyName)
   throws IOException {
 assert dir.hasWriteLock();
-if (dir.isNonEmptyDirectory(src)) {
+final INodesInPath srcIIP = dir.getINodesInPath4Write(src, false);
+if (dir.isNonEmptyDirectory(srcIIP)) {
   throw new IOException(
   Attempt to create an encryption zone for a non-empty directory.);
 }
 
-final INodesInPath srcIIP = dir.getINodesInPath4Write(src, false);
     if (srcIIP != null &&
         srcIIP.getLastINode() != null &&
         !srcIIP.getLastINode().isDirectory()) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/475c6b49/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
index 12feb33..c2e0f08 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
@@ -53,15 +53,17 @@ class FSDirConcatOp {
   }
 }
 
+final INodesInPath trgIip = fsd.getINodesInPath4Write(target);
 // write permission for the target
 if (fsd.isPermissionEnabled()) {
   FSPermissionChecker pc = fsd.getPermissionChecker();
-  fsd.checkPathAccess(pc, target, FsAction.WRITE);
+  fsd.checkPathAccess(pc, trgIip, FsAction.WRITE);
 
   // and srcs
   for(String aSrc: 

[25/41] hadoop git commit: HADOOP-11355. When accessing data in HDFS and the key has been deleted, a Null Pointer Exception is shown. Contributed by Arun Suresh.

2014-12-08 Thread kasha
HADOOP-11355. When accessing data in HDFS and the key has been deleted, a Null 
Pointer Exception is shown. Contributed by Arun Suresh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9cdaec6a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9cdaec6a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9cdaec6a

Branch: refs/heads/YARN-2139
Commit: 9cdaec6a6f6cb1680ad6e44d7b0c8d70cdcbe3fa
Parents: f6452eb
Author: Andrew Wang w...@apache.org
Authored: Fri Dec 5 12:01:23 2014 -0800
Committer: Andrew Wang w...@apache.org
Committed: Fri Dec 5 12:01:23 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt  | 3 +++
 .../crypto/key/kms/server/KeyAuthorizationKeyProvider.java   | 4 
 .../org/apache/hadoop/crypto/key/kms/server/TestKMS.java | 8 
 3 files changed, 15 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9cdaec6a/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 2f88fc8..7a6a938 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -505,6 +505,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11348. Remove unused variable from CMake error message for finding
 openssl (Dian Fu via Colin P. McCabe)
 
+HADOOP-11355. When accessing data in HDFS and the key has been deleted,
+a Null Pointer Exception is shown. (Arun Suresh via wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9cdaec6a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java
index 4ce9611..074f1fb 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java
@@ -240,6 +240,10 @@ public class KeyAuthorizationKeyProvider extends 
KeyProviderCryptoExtension {
 String kn = ekv.getEncryptionKeyName();
 String kvn = ekv.getEncryptionKeyVersionName();
 KeyVersion kv = provider.getKeyVersion(kvn);
+    if (kv == null) {
+      throw new IllegalArgumentException(String.format(
+          "'%s' not found", kvn));
+    }
     if (!kv.getName().equals(kn)) {
       throw new IllegalArgumentException(String.format(
           "KeyVersion '%s' does not belong to the key '%s'", kvn, kn));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9cdaec6a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
 
b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
index b9409ca..61ce807 100644
--- 
a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
+++ 
b/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
@@ -498,6 +498,14 @@ public class TestKMS {
 // deleteKey()
 kp.deleteKey(k1);
 
+    // Check decryption after Key deletion
+    try {
+      kpExt.decryptEncryptedKey(ek1);
+      Assert.fail("Should not be allowed !!");
+    } catch (Exception e) {
+      Assert.assertTrue(e.getMessage().contains("'k1@1' not found"));
+    }
+
 // getKey()
 Assert.assertNull(kp.getKeyVersion(k1));
 



[03/41] hadoop git commit: YARN-2894. Fixed a bug regarding application view acl when RM fails over. Contributed by Rohith Sharmaks

2014-12-08 Thread kasha
YARN-2894. Fixed a bug regarding application view acl when RM fails over. 
Contributed by Rohith Sharmaks


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/392c3aae
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/392c3aae
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/392c3aae

Branch: refs/heads/YARN-2139
Commit: 392c3aaea8e8f156b76e418157fa347256283c56
Parents: 75a326a
Author: Jian He jia...@apache.org
Authored: Tue Dec 2 17:16:20 2014 -0800
Committer: Jian He jia...@apache.org
Committed: Tue Dec 2 17:16:35 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../server/resourcemanager/webapp/AppBlock.java | 20 +++-
 .../resourcemanager/webapp/AppsBlock.java   |  7 ---
 .../webapp/DefaultSchedulerPage.java|  4 ++--
 .../webapp/FairSchedulerAppsBlock.java  |  9 +
 .../webapp/MetricsOverviewTable.java|  9 +++--
 .../resourcemanager/webapp/NodesPage.java   |  9 +++--
 .../server/resourcemanager/webapp/RMWebApp.java |  7 ---
 .../resourcemanager/webapp/RMWebServices.java   |  2 +-
 .../webapp/dao/ClusterMetricsInfo.java  |  3 +--
 .../webapp/dao/UserMetricsInfo.java |  3 +--
 .../webapp/TestRMWebServices.java   |  8 
 .../webapp/TestRMWebServicesApps.java   |  7 ---
 .../TestRMWebServicesAppsModification.java  |  7 ---
 .../webapp/TestRMWebServicesCapacitySched.java  |  7 ---
 .../TestRMWebServicesDelegationTokens.java  |  7 ---
 .../webapp/TestRMWebServicesFairScheduler.java  |  7 ---
 .../webapp/TestRMWebServicesNodeLabels.java |  3 ---
 .../webapp/TestRMWebServicesNodes.java  |  7 ---
 19 files changed, 30 insertions(+), 99 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/392c3aae/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 3744a1ecb..1e336b7 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -169,6 +169,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2890. MiniYARNCluster should start the timeline server based on the
 configuration. (Mit Desai via zjshen)
 
+YARN-2894. Fixed a bug regarding application view acl when RM fails over.
+(Rohith Sharmaks via jianhe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/392c3aae/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppBlock.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppBlock.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppBlock.java
index a108e43..1856d75 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppBlock.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppBlock.java
@@ -40,10 +40,8 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppMetrics;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttempt;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics;
-import org.apache.hadoop.yarn.server.resourcemanager.security.QueueACLsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppAttemptInfo;
 import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppInfo;
-import org.apache.hadoop.yarn.server.security.ApplicationACLsManager;
 import org.apache.hadoop.yarn.util.Apps;
 import org.apache.hadoop.yarn.util.Times;
 import org.apache.hadoop.yarn.util.resource.Resources;
@@ -58,18 +56,14 @@ import com.google.inject.Inject;
 
 public class AppBlock extends HtmlBlock {
 
-  private ApplicationACLsManager aclsManager;
-  private QueueACLsManager queueACLsManager;
   private final Configuration conf;
+  private final ResourceManager rm;
 
   @Inject
-  AppBlock(ResourceManager rm, ViewContext ctx,
-  ApplicationACLsManager aclsManager, QueueACLsManager queueACLsManager,
-  Configuration conf) {
+  AppBlock(ResourceManager rm, ViewContext ctx, Configuration conf) {
   

[12/41] hadoop git commit: YARN-2874. Dead lock in DelegationTokenRenewer which blocks RM to execute any further apps. (Naganarasimha G R via kasha)

2014-12-08 Thread kasha
YARN-2874. Dead lock in DelegationTokenRenewer which blocks RM to execute any 
further apps. (Naganarasimha G R via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/799353e2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/799353e2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/799353e2

Branch: refs/heads/YARN-2139
Commit: 799353e2c7db5af6e40e3521439b5c8a3c5c6a51
Parents: a1e8225
Author: Karthik Kambatla ka...@apache.org
Authored: Wed Dec 3 13:44:41 2014 -0800
Committer: Karthik Kambatla ka...@apache.org
Committed: Wed Dec 3 13:44:41 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../security/DelegationTokenRenewer.java| 12 ++--
 2 files changed, 9 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/799353e2/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 421e5ea..d44f46d 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -175,6 +175,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2894. Fixed a bug regarding application view acl when RM fails over.
 (Rohith Sharmaks via jianhe)
 
+YARN-2874. Dead lock in DelegationTokenRenewer which blocks RM to execute
+any further apps. (Naganarasimha G R via kasha)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/799353e2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
index 2dc331e..cca6e8d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
@@ -20,7 +20,6 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.security;
 
 import java.io.IOException;
 import java.nio.ByteBuffer;
-import java.security.PrivilegedAction;
 import java.security.PrivilegedExceptionAction;
 import java.util.ArrayList;
 import java.util.Collection;
@@ -39,6 +38,7 @@ import java.util.concurrent.LinkedBlockingQueue;
 import java.util.concurrent.ThreadFactory;
 import java.util.concurrent.ThreadPoolExecutor;
 import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
@@ -445,15 +445,15 @@ public class DelegationTokenRenewer extends 
AbstractService {
*/
   private class RenewalTimerTask extends TimerTask {
 private DelegationTokenToRenew dttr;
-private boolean cancelled = false;
+private AtomicBoolean cancelled = new AtomicBoolean(false);
 
 RenewalTimerTask(DelegationTokenToRenew t) {  
   dttr = t;  
 }
 
 @Override
-public synchronized void run() {
-  if (cancelled) {
+public void run() {
+  if (cancelled.get()) {
 return;
   }
 
@@ -475,8 +475,8 @@ public class DelegationTokenRenewer extends AbstractService 
{
 }
 
 @Override
-public synchronized boolean cancel() {
-  cancelled = true;
+public boolean cancel() {
+  cancelled.set(true);
   return super.cancel();
 }
   }
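The hunk above drops `synchronized` from `run()` and `cancel()` and switches the flag to an `AtomicBoolean`, so a thread cancelling a timer task can never block behind an in-flight `run()` (the source of the reported dead lock). A minimal, self-contained sketch of that pattern (hypothetical `CancellableTask` class, not the Hadoop code):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the lock-free cancellation pattern the patch
// adopts: run() and cancel() no longer synchronize on the task, so
// cancellation never contends with an executing run().
class CancellableTask extends TimerTask {
    private final AtomicBoolean cancelled = new AtomicBoolean(false);
    final CountDownLatch ran = new CountDownLatch(1);   // tracks the "work"

    @Override
    public void run() {
        if (cancelled.get()) {
            return;                 // cancelled before firing: do nothing
        }
        ran.countDown();            // stand-in for the real token renewal
    }

    @Override
    public boolean cancel() {
        cancelled.set(true);        // visible to run() without any lock
        return super.cancel();
    }
}

public class RenewalCancelSketch {
    public static void main(String[] args) {
        Timer timer = new Timer(true);
        CancellableTask task = new CancellableTask();
        timer.schedule(task, 60_000);                       // far in the future
        System.out.println("cancelled: " + task.cancel()); // prints cancelled: true
        timer.cancel();
    }
}
```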



[02/41] hadoop git commit: HDFS-7446. HDFS inotify should have the ability to determine what txid it has read up to (cmccabe)

2014-12-08 Thread kasha
HDFS-7446. HDFS inotify should have the ability to determine what txid it has 
read up to (cmccabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/75a326aa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/75a326aa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/75a326aa

Branch: refs/heads/YARN-2139
Commit: 75a326aaff8c92349701d9b3473c3070b8c2be44
Parents: 185e0c7
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Nov 25 17:44:34 2014 -0800
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Tue Dec 2 17:15:21 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hadoop/hdfs/DFSInotifyEventInputStream.java |  65 ++--
 .../apache/hadoop/hdfs/inotify/EventBatch.java  |  41 +++
 .../hadoop/hdfs/inotify/EventBatchList.java |  63 
 .../apache/hadoop/hdfs/inotify/EventsList.java  |  63 
 .../hadoop/hdfs/protocol/ClientProtocol.java|   8 +-
 .../ClientNamenodeProtocolTranslatorPB.java |   4 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java | 341 ++-
 .../namenode/InotifyFSEditLogOpTranslator.java  |  74 ++--
 .../hdfs/server/namenode/NameNodeRpcServer.java |  23 +-
 .../hadoop-hdfs/src/main/proto/inotify.proto|  10 +-
 .../hdfs/TestDFSInotifyEventInputStream.java| 209 +++-
 12 files changed, 516 insertions(+), 388 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/75a326aa/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 4d2fb05..13dda88 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -419,6 +419,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7462. Consolidate implementation of mkdirs() into a single class.
 (wheat9)
 
+HDFS-7446. HDFS inotify should have the ability to determine what txid it
+has read up to (cmccabe)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/75a326aa/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
index 73c5f55..83b92b9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
@@ -19,11 +19,10 @@
 package org.apache.hadoop.hdfs;
 
 import com.google.common.collect.Iterators;
-import com.google.common.util.concurrent.UncheckedExecutionException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.hdfs.inotify.Event;
-import org.apache.hadoop.hdfs.inotify.EventsList;
+import org.apache.hadoop.hdfs.inotify.EventBatch;
+import org.apache.hadoop.hdfs.inotify.EventBatchList;
 import org.apache.hadoop.hdfs.inotify.MissingEventsException;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.util.Time;
@@ -33,13 +32,7 @@ import org.slf4j.LoggerFactory;
 import java.io.IOException;
 import java.util.Iterator;
 import java.util.Random;
-import java.util.concurrent.Callable;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
-import java.util.concurrent.TimeoutException;
 
 /**
  * Stream for reading inotify events. DFSInotifyEventInputStreams should not
@@ -52,7 +45,7 @@ public class DFSInotifyEventInputStream {
   .class);
 
   private final ClientProtocol namenode;
-  private Iterator<Event> it;
+  private Iterator<EventBatch> it;
   private long lastReadTxid;
   /**
* The most recent txid the NameNode told us it has sync'ed -- helps us
@@ -78,22 +71,22 @@ public class DFSInotifyEventInputStream {
   }
 
   /**
-   * Returns the next event in the stream or null if no new events are 
currently
-   * available.
+   * Returns the next batch of events in the stream or null if no new
+   * batches are currently available.
*
* @throws IOException because of network error or edit log
* corruption. Also possible if JournalNodes are unresponsive in the
* QJM setting (even one unresponsive JournalNode is enough in rare cases),
* so catching this exception and retrying at least a few 
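HDFS-7446's core idea is that each polled batch carries the edit-log transaction id it was read at, so a consumer can persist "read up to txid N" and resume from there. A rough, self-contained illustration of that contract (hypothetical `FakeInotifyStream` and simplified `EventBatch`, not the real HDFS API):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch (not the real HDFS classes) of the txid-tracking
// contract: every batch is tagged with its transaction id, and the
// stream remembers the id of the last batch handed to the caller.
class EventBatch {
    final long txid;        // edit-log transaction id of this batch
    final String[] events;  // payload, simplified to strings here
    EventBatch(long txid, String... events) {
        this.txid = txid;
        this.events = events;
    }
}

class FakeInotifyStream {
    private final Queue<EventBatch> pending = new ArrayDeque<>();
    private long lastReadTxid = -1;

    void enqueue(EventBatch b) { pending.add(b); }

    /** Returns the next batch, or null if none is available (non-blocking). */
    EventBatch poll() {
        EventBatch b = pending.poll();
        if (b != null) {
            lastReadTxid = b.txid;   // remember how far we have read
        }
        return b;
    }

    long getLastReadTxid() { return lastReadTxid; }
}
```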

[22/41] hadoop git commit: HADOOP-11356. Removed deprecated o.a.h.fs.permission.AccessControlException. Contributed by Li Lu.

2014-12-08 Thread kasha
HADOOP-11356. Removed deprecated o.a.h.fs.permission.AccessControlException. 
Contributed by Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2829b7a9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2829b7a9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2829b7a9

Branch: refs/heads/YARN-2139
Commit: 2829b7a96ffe6d2ca5e81689c7957e4e97042f2d
Parents: 0653918
Author: Haohui Mai whe...@apache.org
Authored: Fri Dec 5 10:49:43 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Fri Dec 5 10:49:43 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +
 .../fs/permission/AccessControlException.java   | 66 
 .../hadoop/security/AccessControlException.java |  4 +-
 3 files changed, 5 insertions(+), 68 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2829b7a9/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index f2a086e..2f88fc8 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -410,6 +410,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11301. [optionally] update jmx cache to drop old metrics
 (Maysam Yabandeh via stack)
 
+HADOOP-11356. Removed deprecated 
o.a.h.fs.permission.AccessControlException.
+(Li Lu via wheat9)
+
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2829b7a9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AccessControlException.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AccessControlException.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AccessControlException.java
deleted file mode 100644
index 1cd6395..000
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AccessControlException.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * License); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an AS IS BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.fs.permission;
-
-import java.io.IOException;
-
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
-
-/**
- * An exception class for access control related issues.
- * @deprecated Use {@link org.apache.hadoop.security.AccessControlException} 
- * instead.
- */
-@Deprecated
-@InterfaceAudience.Public
-@InterfaceStability.Stable
-public class AccessControlException extends IOException {
-  //Required by {@link java.io.Serializable}.
-  private static final long serialVersionUID = 1L;
-
-  /**
-   * Default constructor is needed for unwrapping from 
-   * {@link org.apache.hadoop.ipc.RemoteException}.
-   */
-  public AccessControlException() {
-super("Permission denied.");
-  }
-
-  /**
-   * Constructs an {@link AccessControlException}
-   * with the specified detail message.
-   * @param s the detail message.
-   */
-  public AccessControlException(String s) {
-super(s);
-  }
-  
-  /**
-   * Constructs a new exception with the specified cause and a detail
-   * message of <tt>(cause==null ? null : cause.toString())</tt> (which
-   * typically contains the class and detail message of <tt>cause</tt>).
-   * @param  cause the cause (which is saved for later retrieval by the
-   * {@link #getCause()} method).  (A <tt>null</tt> value is
-   * permitted, and indicates that the cause is nonexistent or
-   * unknown.)
-   */
-  public AccessControlException(Throwable cause) {
-super(cause);
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2829b7a9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AccessControlException.java

[24/41] hadoop git commit: HDFS-7472. Fix typo in message of ReplicaNotFoundException. Contributed by Masatake Iwasaki.

2014-12-08 Thread kasha
HDFS-7472. Fix typo in message of ReplicaNotFoundException. Contributed by 
Masatake Iwasaki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f6452eb2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f6452eb2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f6452eb2

Branch: refs/heads/YARN-2139
Commit: f6452eb2592a9350bc3f6ce1e354ea55b275ff83
Parents: 6a5596e
Author: Haohui Mai whe...@apache.org
Authored: Fri Dec 5 11:23:13 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Fri Dec 5 11:23:13 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../hadoop/hdfs/server/datanode/ReplicaNotFoundException.java | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f6452eb2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index c6cb185..22f462f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -536,6 +536,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7448 TestBookKeeperHACheckpoints fails in trunk build
 (Akira Ajisaka via stevel)
 
+HDFS-7472. Fix typo in message of ReplicaNotFoundException.
+(Masatake Iwasaki via wheat9)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f6452eb2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
index 124574b..b159d3a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaNotFoundException.java
@@ -37,7 +37,7 @@ public class ReplicaNotFoundException extends IOException {
  public final static String NON_EXISTENT_REPLICA =
      "Cannot append to a non-existent replica ";
  public final static String UNEXPECTED_GS_REPLICA =
-      "Cannot append to a replica with unexpeted generation stamp ";
+      "Cannot append to a replica with unexpected generation stamp ";
 
   public ReplicaNotFoundException() {
 super();



[13/41] hadoop git commit: YARN-2891. Failed Container Executor does not provide a clear error message. Contributed by Dustin Cote. (harsh)

2014-12-08 Thread kasha
YARN-2891. Failed Container Executor does not provide a clear error message. 
Contributed by Dustin Cote. (harsh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a31e0164
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a31e0164
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a31e0164

Branch: refs/heads/YARN-2139
Commit: a31e0164912236630c485e5aeb908b43e3a67c61
Parents: 799353e
Author: Harsh J ha...@cloudera.com
Authored: Thu Dec 4 03:16:08 2014 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Thu Dec 4 03:17:15 2014 +0530

--
 hadoop-yarn-project/CHANGES.txt   | 3 +++
 .../src/main/native/container-executor/impl/container-executor.c  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a31e0164/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index d44f46d..91151ad 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -59,6 +59,9 @@ Release 2.7.0 - UNRELEASED
 
   IMPROVEMENTS
 
+YARN-2891. Failed Container Executor does not provide a clear error
+message. (Dustin Cote via harsh)
+
 YARN-1979. TestDirectoryCollection fails when the umask is unusual.
 (Vinod Kumar Vavilapalli and Tsuyoshi OZAWA via junping_du)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a31e0164/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 9af9161..4fc78b6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -526,7 +526,7 @@ int check_dir(char* npath, mode_t st_mode, mode_t desired, 
int finalComponent) {
    int filePermInt = st_mode & (S_IRWXU | S_IRWXG | S_IRWXO);
    int desiredInt = desired & (S_IRWXU | S_IRWXG | S_IRWXO);
    if (filePermInt != desiredInt) {
-      fprintf(LOGFILE, "Path %s does not have desired permission.\n", npath);
+      fprintf(LOGFILE, "Path %s has permission %o but needs permission %o.\n", npath, filePermInt, desiredInt);
   return -1;
 }
   }
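The same check, sketched in Java with a hypothetical helper (the patched code is the C container-executor above): mask the mode down to the rwx bits for user/group/other and report both actual and desired values in octal when they differ.

```java
// Hypothetical Java sketch of the improved check_dir() message: compare
// only the rwx permission bits and report actual vs. desired in octal
// instead of a vague "does not have desired permission".
public class PermCheck {
    static final int RWX_MASK = 0777;   // S_IRWXU | S_IRWXG | S_IRWXO

    /** Returns null when the permission matches, else the error message. */
    static String check(String path, int stMode, int desired) {
        int filePerm = stMode & RWX_MASK;
        int want = desired & RWX_MASK;
        if (filePerm != want) {
            return String.format(
                "Path %s has permission %o but needs permission %o.",
                path, filePerm, want);
        }
        return null;
    }

    public static void main(String[] args) {
        // prints: Path /tmp/x has permission 755 but needs permission 750.
        System.out.println(check("/tmp/x", 0755, 0750));
    }
}
```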



[18/41] hadoop git commit: YARN-2301. Improved yarn container command. Contributed by Naganarasimha G R

2014-12-08 Thread kasha
YARN-2301. Improved yarn container command. Contributed by Naganarasimha G R


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/258623ff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/258623ff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/258623ff

Branch: refs/heads/YARN-2139
Commit: 258623ff8bb1a1057ae3501d4f20982d5a59ea34
Parents: 565b0e6
Author: Jian He jia...@apache.org
Authored: Thu Dec 4 12:51:15 2014 -0800
Committer: Jian He jia...@apache.org
Committed: Thu Dec 4 12:53:18 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt |  2 +
 .../hadoop/yarn/client/cli/ApplicationCLI.java  |  8 +++-
 .../hadoop/yarn/client/cli/TestYarnCLI.java | 41 +---
 .../yarn/server/resourcemanager/RMContext.java  |  3 ++
 .../server/resourcemanager/RMContextImpl.java   | 11 ++
 .../server/resourcemanager/ResourceManager.java |  2 +
 .../rmcontainer/RMContainerImpl.java|  9 -
 .../resourcemanager/TestClientRMService.java|  1 +
 .../rmcontainer/TestRMContainerImpl.java|  6 ++-
 9 files changed, 63 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/258623ff/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 30b9260..f032b4f 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -121,6 +121,8 @@ Release 2.7.0 - UNRELEASED
 YARN-1156. Enhance NodeManager AllocatedGB and AvailableGB metrics 
 for aggregation of decimal values. (Tsuyoshi OZAWA via junping_du)
 
+YARN-2301. Improved yarn container command. (Naganarasimha G R via jianhe)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/258623ff/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
index a847cd5..83d212d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
@@ -44,6 +44,7 @@ import 
org.apache.hadoop.yarn.api.records.YarnApplicationState;
 import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException;
 import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.util.ConverterUtils;
+import org.apache.hadoop.yarn.util.Times;
 
 import com.google.common.annotations.VisibleForTesting;
 
@@ -536,8 +537,11 @@ public class ApplicationCLI extends YarnCLI {
     writer.printf(CONTAINER_PATTERN, "Container-Id", "Start Time",
         "Finish Time", "State", "Host", "LOG-URL");
 for (ContainerReport containerReport : appsReport) {
-  writer.printf(CONTAINER_PATTERN, containerReport.getContainerId(),
-  containerReport.getCreationTime(), containerReport.getFinishTime(),
+  writer.printf(
+  CONTAINER_PATTERN,
+  containerReport.getContainerId(),
+  Times.format(containerReport.getCreationTime()),
+  Times.format(containerReport.getFinishTime()),  
   containerReport.getContainerState(), containerReport
   .getAssignedNode(), containerReport.getLogUrl());
 }
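The change above runs the raw creation/finish timestamps through `Times.format` before printing. The practical effect, sketched with a hypothetical helper (the format string is an assumption, not Hadoop's exact one): epoch millis become a readable timestamp, and the non-positive sentinel used for "not finished yet" renders as N/A instead of a date in 1970.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical sketch of Times.format-style output for the container
// report: positive epoch millis are formatted, anything else is "N/A".
public class TimesSketch {
    static String format(long ts) {
        return ts > 0
            ? new SimpleDateFormat("EEE MMM dd HH:mm:ss Z yyyy").format(new Date(ts))
            : "N/A";
    }

    public static void main(String[] args) {
        System.out.println(format(0));               // N/A
        System.out.println(format(1417651200000L));  // a readable date
    }
}
```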

http://git-wip-us.apache.org/repos/asf/hadoop/blob/258623ff/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
index 9d9a86a..194d7d1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
@@ -32,19 +32,17 @@ import java.io.ByteArrayOutputStream;
 import java.io.IOException;
 import java.io.PrintStream;
 import java.io.PrintWriter;
+import java.text.DateFormat;
+import java.text.SimpleDateFormat;
 import java.util.ArrayList;
 import java.util.Date;
 import java.util.EnumSet;
-import java.util.HashMap;
 import 

[20/41] hadoop git commit: YARN-2189. [YARN-1492] Admin service for cache manager. (Chris Trezzo via kasha)

2014-12-08 Thread kasha
YARN-2189. [YARN-1492] Admin service for cache manager. (Chris Trezzo via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/78968155
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/78968155
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/78968155

Branch: refs/heads/YARN-2139
Commit: 78968155d7f87f2147faf96c5eef9c23dba38db8
Parents: 26d8dec
Author: Karthik Kambatla ka...@apache.org
Authored: Thu Dec 4 17:36:32 2014 -0800
Committer: Karthik Kambatla ka...@apache.org
Committed: Thu Dec 4 17:36:32 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 hadoop-yarn-project/hadoop-yarn/bin/yarn|   5 +
 .../hadoop-yarn/hadoop-yarn-api/pom.xml |   1 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  12 ++
 .../yarn/server/api/SCMAdminProtocol.java   |  53 ++
 .../yarn/server/api/SCMAdminProtocolPB.java |  31 
 .../RunSharedCacheCleanerTaskRequest.java   |  37 
 .../RunSharedCacheCleanerTaskResponse.java  |  58 ++
 .../main/proto/server/SCM_Admin_protocol.proto  |  29 +++
 .../src/main/proto/yarn_service_protos.proto|  11 ++
 .../org/apache/hadoop/yarn/client/SCMAdmin.java | 183 +++
 .../pb/client/SCMAdminProtocolPBClientImpl.java |  73 
 .../service/SCMAdminProtocolPBServiceImpl.java  |  57 ++
 .../RunSharedCacheCleanerTaskRequestPBImpl.java |  53 ++
 ...RunSharedCacheCleanerTaskResponsePBImpl.java |  66 +++
 .../src/main/resources/yarn-default.xml |  12 ++
 .../SCMAdminProtocolService.java| 146 +++
 .../sharedcachemanager/SharedCacheManager.java  |   8 +
 .../TestSCMAdminProtocolService.java| 135 ++
 19 files changed, 973 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/78968155/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index f032b4f..252b7d5 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -54,6 +54,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2188. [YARN-1492] Client service for cache manager. 
 (Chris Trezzo and Sangjin Lee via kasha)
 
+YARN-2189. [YARN-1492] Admin service for cache manager.
+(Chris Trezzo via kasha)
+
 YARN-2765. Added leveldb-based implementation for RMStateStore. (Jason Lowe
 via jianhe)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/78968155/hadoop-yarn-project/hadoop-yarn/bin/yarn
--
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn 
b/hadoop-yarn-project/hadoop-yarn/bin/yarn
index b98f344..dfa27e4 100644
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn
@@ -36,6 +36,7 @@ function hadoop_usage
   echo "  resourcemanager -format-state-store   deletes the RMStateStore"
   echo "  rmadmin                               admin tools"
   echo "  sharedcachemanager                    run the SharedCacheManager daemon"
+  echo "  scmadmin                              SharedCacheManager admin tools"
   echo "  timelineserver                        run the timeline server"
   echo "  version                               print the version"
   echo " or"
@@ -162,6 +163,10 @@ case ${COMMAND} in
    CLASS='org.apache.hadoop.yarn.server.sharedcachemanager.SharedCacheManager'
    YARN_OPTS="$YARN_OPTS $YARN_SHAREDCACHEMANAGER_OPTS"
  ;;
+  scmadmin)
+    CLASS='org.apache.hadoop.yarn.client.SCMAdmin'
+    YARN_OPTS="$YARN_OPTS $YARN_CLIENT_OPTS"
+  ;;
   version)
    CLASS=org.apache.hadoop.util.VersionInfo
    hadoop_debug "Append YARN_CLIENT_OPTS onto YARN_OPTS"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/78968155/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
index 5e2278d..a763d39 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
@@ -97,6 +97,7 @@
          <include>application_history_client.proto</include>
          <include>server/application_history_server.proto</include>
          <include>client_SCM_protocol.proto</include>
+         <include>server/SCM_Admin_protocol.proto</include>
        </includes>
      </source>
      <output>${project.build.directory}/generated-sources/java</output>


[15/41] hadoop git commit: HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if kerberos TGT is available in the subject. Contributed by Dian Fu.

2014-12-08 Thread kasha
HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if kerberos 
TGT is available in the subject. Contributed by Dian Fu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9d1a8f58
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9d1a8f58
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9d1a8f58

Branch: refs/heads/YARN-2139
Commit: 9d1a8f5897d585bec96de32116fbd2118f8e0f95
Parents: 73fbb3c
Author: Aaron T. Myers a...@apache.org
Authored: Wed Dec 3 18:53:45 2014 -0800
Committer: Aaron T. Myers a...@apache.org
Committed: Wed Dec 3 18:53:45 2014 -0800

--
 .../security/authentication/client/KerberosAuthenticator.java  | 6 +-
 hadoop-common-project/hadoop-common/CHANGES.txt| 3 +++
 2 files changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d1a8f58/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
index e4ebf1b..928866c 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
@@ -23,6 +23,8 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import javax.security.auth.Subject;
+import javax.security.auth.kerberos.KerberosKey;
+import javax.security.auth.kerberos.KerberosTicket;
 import javax.security.auth.login.AppConfigurationEntry;
 import javax.security.auth.login.Configuration;
 import javax.security.auth.login.LoginContext;
@@ -247,7 +249,9 @@ public class KerberosAuthenticator implements Authenticator 
{
 try {
   AccessControlContext context = AccessController.getContext();
   Subject subject = Subject.getSubject(context);
-      if (subject == null) {
+      if (subject == null
+          || (subject.getPrivateCredentials(KerberosKey.class).isEmpty()
+              && subject.getPrivateCredentials(KerberosTicket.class).isEmpty())) {
         LOG.debug("No subject in context, logging in");
         subject = new Subject();
         LoginContext login = new LoginContext("", subject,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d1a8f58/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 7a2159f..f53bceb 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -496,6 +496,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11342. KMS key ACL should ignore ALL operation for default key ACL
 and whitelist key ACL. (Dian Fu via wang)
 
+HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if
+kerberos TGT is available in the subject. (Dian Fu via atm)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES
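The guard HADOOP-11332 adds treats the current `Subject` as "not logged in" when it carries neither a Kerberos key nor a ticket, falling back to a fresh login in that case. A self-contained sketch of that predicate (the Kerberos credential types are replaced with plain marker classes here; the real code uses `javax.security.auth.kerberos`):

```java
import javax.security.auth.Subject;

// Marker stand-ins for the real javax.security.auth.kerberos classes,
// so this sketch compiles and runs without a Kerberos environment.
class KerberosKey {}
class KerberosTicket {}

public class SubjectGuard {
    // A Subject counts as logged in only if it actually carries
    // Kerberos credentials of either kind.
    static boolean needsLogin(Subject subject) {
        return subject == null
            || (subject.getPrivateCredentials(KerberosKey.class).isEmpty()
                && subject.getPrivateCredentials(KerberosTicket.class).isEmpty());
    }

    public static void main(String[] args) {
        Subject s = new Subject();
        System.out.println(needsLogin(s));   // true: no credentials yet
        s.getPrivateCredentials().add(new KerberosKey());
        System.out.println(needsLogin(s));   // false: has a Kerberos key
    }
}
```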



[07/41] hadoop git commit: YARN-1156. Enhance NodeManager AllocatedGB and AvailableGB metrics for aggregation of decimal values. (Contributed by Tsuyoshi OZAWA)

2014-12-08 Thread kasha
YARN-1156. Enhance NodeManager AllocatedGB and AvailableGB metrics for 
aggregation of decimal values. (Contributed by Tsuyoshi OZAWA)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e65b7c5f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e65b7c5f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e65b7c5f

Branch: refs/heads/YARN-2139
Commit: e65b7c5ff6b0c013e510e750fe5cf59acfefea5f
Parents: 7caa3bc
Author: Junping Du junping...@apache.org
Authored: Wed Dec 3 04:11:18 2014 -0800
Committer: Junping Du junping...@apache.org
Committed: Wed Dec 3 04:11:18 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt  |  3 +++
 .../nodemanager/metrics/NodeManagerMetrics.java  | 19 ++-
 .../metrics/TestNodeManagerMetrics.java  | 17 -
 3 files changed, 29 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e65b7c5f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 1e336b7..421e5ea 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -112,6 +112,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2136. Changed RMStateStore to ignore store opearations when fenced.
 (Varun Saxena via jianhe)
 
+YARN-1156. Enhance NodeManager AllocatedGB and AvailableGB metrics 
+for aggregation of decimal values. (Tsuyoshi OZAWA via junping_du)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e65b7c5f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
index a3637d5..beaafe1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/metrics/NodeManagerMetrics.java
@@ -47,6 +47,9 @@ public class NodeManagerMetrics {
  @Metric("Container launch duration")
  MutableRate containerLaunchDuration;
 
+  private long allocatedMB;
+  private long availableMB;
+
   public static NodeManagerMetrics create() {
 return create(DefaultMetricsSystem.instance());
   }
@@ -92,22 +95,27 @@ public class NodeManagerMetrics {
 
   public void allocateContainer(Resource res) {
 allocatedContainers.incr();
-allocatedGB.incr(res.getMemory() / 1024);
-availableGB.decr(res.getMemory() / 1024);
+allocatedMB = allocatedMB + res.getMemory();
+allocatedGB.set((int)Math.ceil(allocatedMB/1024d));
+availableMB = availableMB - res.getMemory();
+availableGB.set((int)Math.floor(availableMB/1024d));
 allocatedVCores.incr(res.getVirtualCores());
 availableVCores.decr(res.getVirtualCores());
   }
 
   public void releaseContainer(Resource res) {
 allocatedContainers.decr();
-allocatedGB.decr(res.getMemory() / 1024);
-availableGB.incr(res.getMemory() / 1024);
+allocatedMB = allocatedMB - res.getMemory();
+allocatedGB.set((int)Math.ceil(allocatedMB/1024d));
+availableMB = availableMB + res.getMemory();
+availableGB.set((int)Math.floor(availableMB/1024d));
 allocatedVCores.decr(res.getVirtualCores());
 availableVCores.incr(res.getVirtualCores());
   }
 
   public void addResource(Resource res) {
-availableGB.incr(res.getMemory() / 1024);
+availableMB = availableMB + res.getMemory();
+availableGB.incr((int)Math.floor(availableMB/1024d));
 availableVCores.incr(res.getVirtualCores());
   }
 
@@ -118,4 +126,5 @@ public class NodeManagerMetrics {
   public int getRunningContainers() {
 return containersRunning.value();
   }
+
 }

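The change above keeps running MB totals and derives the GB gauges from them, so sub-GB containers are no longer lost to integer division. A minimal standalone sketch (simplified names, not the real NodeManagerMetrics class; the 8 GB node size is an assumption for illustration):

```java
public class GbMetricsSketch {
    private long allocatedMB;
    private long availableMB = 8192;  // assumed 8 GB node, for illustration only

    // Round allocated memory up and available memory down, as the patch does,
    // so a partially used GB counts as allocated rather than disappearing.
    int allocatedGB() { return (int) Math.ceil(allocatedMB / 1024d); }
    int availableGB() { return (int) Math.floor(availableMB / 1024d); }

    void allocate(long mb) { allocatedMB += mb; availableMB -= mb; }
    void release(long mb)  { allocatedMB -= mb; availableMB += mb; }

    public static void main(String[] args) {
        GbMetricsSketch m = new GbMetricsSketch();
        m.allocate(512);  // a 0.5 GB container
        // with the old integer division, 512 / 1024 == 0, so this allocation was invisible
        System.out.println(m.allocatedGB() + " " + m.availableGB());  // prints "1 7"
    }
}
```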
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e65b7c5f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/metrics/TestNodeManagerMetrics.java
--
diff --git 

[29/41] hadoop git commit: YARN-2869. CapacityScheduler should trim sub queue names when parse configuration. Contributed by Wangda Tan

2014-12-08 Thread kasha
YARN-2869. CapacityScheduler should trim sub queue names when parse 
configuration. Contributed by Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e69af836
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e69af836
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e69af836

Branch: refs/heads/YARN-2139
Commit: e69af836f34f16fba565ab112c9bf0d367675b16
Parents: 475c6b4
Author: Jian He jia...@apache.org
Authored: Fri Dec 5 17:33:39 2014 -0800
Committer: Jian He jia...@apache.org
Committed: Fri Dec 5 17:33:39 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../CapacitySchedulerConfiguration.java |  10 +-
 .../scheduler/capacity/TestQueueParsing.java| 110 +++
 3 files changed, 122 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e69af836/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0b88959..0d7a843 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -194,6 +194,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in
 YarnConfiguration. (rchiang via rkanter)
 
+YARN-2869. CapacityScheduler should trim sub queue names when parse
+configuration. (Wangda Tan via jianhe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e69af836/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
index 0a49224..5bbb436 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
@@ -260,7 +260,7 @@ public class CapacitySchedulerConfiguration extends 
ReservationSchedulerConfigur
 }
   }
 
-  private String getQueuePrefix(String queue) {
+  static String getQueuePrefix(String queue) {
 String queueName = PREFIX + queue + DOT;
 return queueName;
   }
@@ -538,6 +538,14 @@ public class CapacitySchedulerConfiguration extends 
ReservationSchedulerConfigur
   public String[] getQueues(String queue) {
 LOG.debug("CSConf - getQueues called for: queuePrefix=" +
 getQueuePrefix(queue));
 String[] queues = getStrings(getQueuePrefix(queue) + QUEUES);
+List<String> trimmedQueueNames = new ArrayList<String>();
+if (null != queues) {
+  for (String s : queues) {
+trimmedQueueNames.add(s.trim());
+  }
+  queues = trimmedQueueNames.toArray(new String[0]);
+}
+ 
 LOG.debug("CSConf - getQueues: queuePrefix=" + getQueuePrefix(queue) +
 ", queues=" + ((queues == null) ? "" :
 StringUtils.arrayToString(queues)));
 return queues;

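The trimming logic above can be read in isolation: split a comma-separated queue list and strip the whitespace users commonly leave after commas in capacity-scheduler.xml. A hypothetical standalone version (not the real CapacitySchedulerConfiguration):

```java
import java.util.ArrayList;
import java.util.List;

public class QueueTrimSketch {
    // Mirrors the patch: trim each entry, preserving a null result for a
    // missing configuration value.
    static String[] trimQueues(String[] queues) {
        if (queues == null) {
            return null;
        }
        List<String> trimmed = new ArrayList<String>();
        for (String q : queues) {
            trimmed.add(q.trim());  // " b " -> "b"
        }
        return trimmed.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] raw = "a, b ,c".split(",");   // as read from a config value
        String[] clean = trimQueues(raw);
        System.out.println(String.join("|", clean));  // prints "a|b|c"
    }
}
```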
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e69af836/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
index cf2e5ce..5a9fbe1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
+++ 

[10/41] hadoop git commit: HADOOP-11342. KMS key ACL should ignore ALL operation for default key ACL and whitelist key ACL. Contributed by Dian Fu.

2014-12-08 Thread kasha
HADOOP-11342. KMS key ACL should ignore ALL operation for default key ACL and 
whitelist key ACL. Contributed by Dian Fu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1812241e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1812241e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1812241e

Branch: refs/heads/YARN-2139
Commit: 1812241ee10c0a98844bffb9341f770d54655f52
Parents: 03ab24a
Author: Andrew Wang w...@apache.org
Authored: Wed Dec 3 12:00:14 2014 -0800
Committer: Andrew Wang w...@apache.org
Committed: Wed Dec 3 12:00:14 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../hadoop/crypto/key/kms/server/KMSACLs.java   | 26 ++--
 .../hadoop/crypto/key/kms/server/TestKMS.java   |  5 +++-
 3 files changed, 26 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1812241e/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 2f17f22..7a2159f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -493,6 +493,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11344. KMS kms-config.sh sets a default value for the keystore
 password even in non-ssl setup. (Arun Suresh via wang)
 
+HADOOP-11342. KMS key ACL should ignore ALL operation for default key ACL
+and whitelist key ACL. (Dian Fu via wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1812241e/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
index 0217589..c33dd4b 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSACLs.java
@@ -152,20 +152,30 @@ public class KMSACLs implements Runnable, KeyACLs {
 String confKey = KMSConfiguration.DEFAULT_KEY_ACL_PREFIX + keyOp;
 String aclStr = conf.get(confKey);
 if (aclStr != null) {
-  if (aclStr.equals("*")) {
-LOG.info("Default Key ACL for KEY_OP '{}' is set to '*'", keyOp);
+  if (keyOp == KeyOpType.ALL) {
+// Ignore All operation for default key acl
+LOG.warn("Should not configure default key ACL for KEY_OP '{}'", keyOp);
+  } else {
+if (aclStr.equals("*")) {
+  LOG.info("Default Key ACL for KEY_OP '{}' is set to '*'", keyOp);
+}
+defaultKeyAcls.put(keyOp, new AccessControlList(aclStr));
   }
-  defaultKeyAcls.put(keyOp, new AccessControlList(aclStr));
 }
   }
   if (!whitelistKeyAcls.containsKey(keyOp)) {
 String confKey = KMSConfiguration.WHITELIST_KEY_ACL_PREFIX + keyOp;
 String aclStr = conf.get(confKey);
 if (aclStr != null) {
-  if (aclStr.equals("*")) {
-LOG.info("Whitelist Key ACL for KEY_OP '{}' is set to '*'", keyOp);
+  if (keyOp == KeyOpType.ALL) {
+// Ignore All operation for whitelist key acl
+LOG.warn("Should not configure whitelist key ACL for KEY_OP '{}'", keyOp);
+  } else {
+if (aclStr.equals("*")) {
+  LOG.info("Whitelist Key ACL for KEY_OP '{}' is set to '*'", keyOp);
+}
+whitelistKeyAcls.put(keyOp, new AccessControlList(aclStr));
   }
-  whitelistKeyAcls.put(keyOp, new AccessControlList(aclStr));
 }
   }
 }
@@ -271,7 +281,9 @@ public class KMSACLs implements Runnable, KeyACLs {
 
   @Override
   public boolean isACLPresent(String keyName, KeyOpType opType) {
-return (keyAcls.containsKey(keyName) || 
defaultKeyAcls.containsKey(opType));
+return (keyAcls.containsKey(keyName)
+|| defaultKeyAcls.containsKey(opType)
+|| whitelistKeyAcls.containsKey(opType));
   }
 
 }
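The `isACLPresent` change above adds the whitelist map to the lookup. A toy model of the resulting presence check (simplified names and a string stand-in for AccessControlList; not the real KMSACLs class):

```java
import java.util.HashMap;
import java.util.Map;

public class AclPresenceSketch {
    enum KeyOpType { READ, MANAGEMENT, ALL }

    final Map<String, String> keyAcls = new HashMap<>();
    final Map<KeyOpType, String> defaultKeyAcls = new HashMap<>();
    final Map<KeyOpType, String> whitelistKeyAcls = new HashMap<>();

    boolean isAclPresent(String keyName, KeyOpType opType) {
        // Before the patch the whitelist map was not consulted here, so a key
        // covered only by a whitelist ACL looked as if it had no ACL at all.
        return keyAcls.containsKey(keyName)
                || defaultKeyAcls.containsKey(opType)
                || whitelistKeyAcls.containsKey(opType);
    }

    public static void main(String[] args) {
        AclPresenceSketch acls = new AclPresenceSketch();
        acls.whitelistKeyAcls.put(KeyOpType.READ, "admin");
        System.out.println(acls.isAclPresent("key1", KeyOpType.READ));  // prints "true"
    }
}
```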

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1812241e/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
 

[05/41] hadoop git commit: HADOOP-11296. hadoop-daemons.sh throws 'host1: bash: host3: command not found...' (Contributed by Vinayakumar B)

2014-12-08 Thread kasha
HADOOP-11296. hadoop-daemons.sh throws 'host1: bash: host3: command not 
found...' (Contributed by Vinayakumar B)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/92ce6eda
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/92ce6eda
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/92ce6eda

Branch: refs/heads/YARN-2139
Commit: 92ce6eda920a1bab74df68c0badb4f06728fc177
Parents: 3d48ad7
Author: Vinayakumar B vinayakum...@apache.org
Authored: Wed Dec 3 10:07:43 2014 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Wed Dec 3 10:07:43 2014 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt| 3 +++
 .../hadoop-common/src/main/bin/hadoop-functions.sh | 6 ++
 2 files changed, 9 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/92ce6eda/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 10c6d76..2f17f22 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -355,6 +355,9 @@ Trunk (Unreleased)
 
 HADOOP-11298. slaves.sh and stop-all.sh are missing slashes (aw)
 
+HADOOP-11296. hadoop-daemons.sh throws 'host1: bash: host3: 
+command not found...' (vinayakumarb)
+
   OPTIMIZATIONS
 
 HADOOP-7761. Improve the performance of raw comparisons. (todd)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/92ce6eda/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
index b5d4b1c..2b56634 100644
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
@@ -273,6 +273,12 @@ function hadoop_connect_to_hosts
 # moral of the story: just use pdsh.
 export -f hadoop_actual_ssh
 export HADOOP_SSH_OPTS
+
+# xargs is used with option -I to replace the placeholder in the arguments
+# list with each hostname read from stdin/pipe. But it considers one
+# line as one argument while reading from stdin/pipe. So place each
+# hostname on a separate line when passing via pipe.
+SLAVE_NAMES=$(echo "${SLAVE_NAMES}" | tr ' ' '\n' )
 echo "${SLAVE_NAMES}" | \
 xargs -n 1 -P"${HADOOP_SSH_PARALLEL}" \
 -I {} bash -c --  "hadoop_actual_ssh {} ${params}"



[33/41] hadoop git commit: HADOOP-11313. Adding a document about NativeLibraryChecker. Contributed by Tsuyoshi OZAWA.

2014-12-08 Thread kasha
HADOOP-11313. Adding a document about NativeLibraryChecker. Contributed by 
Tsuyoshi OZAWA.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1b3bb9e7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1b3bb9e7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1b3bb9e7

Branch: refs/heads/YARN-2139
Commit: 1b3bb9e7a33716c4d94786598b91a24a4b29fe67
Parents: 9297f98
Author: cnauroth cnaur...@apache.org
Authored: Sat Dec 6 20:12:31 2014 -0800
Committer: cnauroth cnaur...@apache.org
Committed: Sat Dec 6 20:12:31 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../src/site/apt/NativeLibraries.apt.vm   | 18 ++
 2 files changed, 21 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b3bb9e7/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 965c6d3..a626388 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -413,6 +413,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11356. Removed deprecated 
o.a.h.fs.permission.AccessControlException.
 (Li Lu via wheat9)
 
+HADOOP-11313. Adding a document about NativeLibraryChecker.
+(Tsuyoshi OZAWA via cnauroth)
+
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b3bb9e7/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm 
b/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
index 49818af..866b428 100644
--- a/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
+++ b/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm
@@ -164,6 +164,24 @@ Native Libraries Guide
  * If something goes wrong, then:
INFO util.NativeCodeLoader - Unable to load native-hadoop library 
for your platform... using builtin-java classes where applicable
 
+* Check
+
+   NativeLibraryChecker is a tool to check whether native libraries are loaded 
correctly.
+   You can launch NativeLibraryChecker as follows:
+
+
+   $ hadoop checknative -a
+   14/12/06 01:30:45 WARN bzip2.Bzip2Factory: Failed to load/initialize 
native-bzip2 library system-native, will use pure-Java version
   14/12/06 01:30:45 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
+   Native library checking:
+   hadoop: true /home/ozawa/hadoop/lib/native/libhadoop.so.1.0.0
+   zlib:   true /lib/x86_64-linux-gnu/libz.so.1
+   snappy: true /usr/lib/libsnappy.so.1
+   lz4:true revision:99
+   bzip2:  false
+
+
+
 * Native Shared Libraries
 
You can load any native shared library using DistributedCache for



[39/41] hadoop git commit: HADOOP-11354. ThrottledInputStream doesn't perform effective throttling. Contributed by Ted Yu.

2014-12-08 Thread kasha
HADOOP-11354. ThrottledInputStream doesn't perform effective throttling. 
Contributed by Ted Yu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/57cb43be
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/57cb43be
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/57cb43be

Branch: refs/heads/YARN-2139
Commit: 57cb43be50c81daad8da34d33a45f396d9c1c35b
Parents: d555bb2
Author: Jing Zhao ji...@apache.org
Authored: Mon Dec 8 11:08:17 2014 -0800
Committer: Jing Zhao ji...@apache.org
Committed: Mon Dec 8 11:08:39 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../java/org/apache/hadoop/tools/util/ThrottledInputStream.java   | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/57cb43be/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 616842f..d496276 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -516,6 +516,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11343. Overflow is not properly handled in caclulating final iv for
 AES CTR. (Jerry Chen via wang)
 
+HADOOP-11354. ThrottledInputStream doesn't perform effective throttling.
+(Ted Yu via jing9)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/57cb43be/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
index f6fe118..d08a301 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
@@ -115,7 +115,7 @@ public class ThrottledInputStream extends InputStream {
   }
 
   private void throttle() throws IOException {
-if (getBytesPerSec() > maxBytesPerSec) {
+while (getBytesPerSec() > maxBytesPerSec) {
   try {
 Thread.sleep(SLEEP_DURATION_MS);
 totalSleepTime += SLEEP_DURATION_MS;

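The one-word fix above matters because a single fixed-length sleep may not bring the observed rate back under the cap; the check has to loop until it does. A minimal illustration (not the real ThrottledInputStream; constants and field names are simplified):

```java
public class ThrottleSketch {
    private static final long SLEEP_DURATION_MS = 50;
    private final long maxBytesPerSec;
    private final long startTimeMs = System.currentTimeMillis();
    private long bytesRead;

    ThrottleSketch(long maxBytesPerSec) { this.maxBytesPerSec = maxBytesPerSec; }

    long bytesPerSec() {
        // Clamp elapsed time to at least 1 ms to avoid division by zero.
        long elapsed = Math.max(1, System.currentTimeMillis() - startTimeMs);
        return bytesRead * 1000 / elapsed;
    }

    void record(long bytes) throws InterruptedException {
        bytesRead += bytes;
        // With "if", one sleep is taken and the caller proceeds even if the
        // rate is still above the cap; "while" keeps sleeping until it drops.
        while (bytesPerSec() > maxBytesPerSec) {
            Thread.sleep(SLEEP_DURATION_MS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ThrottleSketch t = new ThrottleSketch(10_000);  // cap at 10 KB/s
        t.record(5_000);  // well above 10 KB/s at first, so several sleeps run
        System.out.println(t.bytesPerSec() <= 10_000);  // prints "true"
    }
}
```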


[34/41] hadoop git commit: YARN-2927. [YARN-1492] InMemorySCMStore properties are inconsistent. (Ray Chiang via kasha)

2014-12-08 Thread kasha
YARN-2927. [YARN-1492] InMemorySCMStore properties are inconsistent. (Ray 
Chiang via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/120e1dec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/120e1dec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/120e1dec

Branch: refs/heads/YARN-2139
Commit: 120e1decd7f6861e753269690d454cb14c240857
Parents: 1b3bb9e
Author: Karthik Kambatla ka...@apache.org
Authored: Sun Dec 7 22:28:26 2014 -0800
Committer: Karthik Kambatla ka...@apache.org
Committed: Sun Dec 7 22:28:26 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt   | 3 +++
 .../main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/120e1dec/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0d7a843..43b19ec 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -197,6 +197,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2869. CapacityScheduler should trim sub queue names when parse
 configuration. (Wangda Tan via jianhe)
 
+YARN-2927. [YARN-1492] InMemorySCMStore properties are inconsistent. 
+(Ray Chiang via kasha)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/120e1dec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 10ba832..55073c5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1416,7 +1416,7 @@ public class YarnConfiguration extends Configuration {
   // In-memory SCM store configuration
   
   public static final String IN_MEMORY_STORE_PREFIX =
-  SHARED_CACHE_PREFIX + "in-memory.";
+  SCM_STORE_PREFIX + "in-memory.";
 
   /**
* A resource in the InMemorySCMStore is considered stale if the time since



[41/41] hadoop git commit: HADOOP-11329. Add JAVA_LIBRARY_PATH to KMS startup options. Contributed by Arun Suresh.

2014-12-08 Thread kasha
HADOOP-11329. Add JAVA_LIBRARY_PATH to KMS startup options. Contributed by Arun 
Suresh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ddffcd8f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ddffcd8f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ddffcd8f

Branch: refs/heads/YARN-2139
Commit: ddffcd8fac8af0ff78e63cca583af5c77a062891
Parents: 6c5bbd7
Author: Andrew Wang w...@apache.org
Authored: Mon Dec 8 13:44:44 2014 -0800
Committer: Andrew Wang w...@apache.org
Committed: Mon Dec 8 13:45:19 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt  |  2 ++
 .../hadoop-kms/src/main/conf/kms-env.sh  |  6 ++
 hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh| 11 ++-
 .../hadoop-kms/src/site/apt/index.apt.vm |  9 +
 4 files changed, 27 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ddffcd8f/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index d496276..d9219cc 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -519,6 +519,8 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11354. ThrottledInputStream doesn't perform effective throttling.
 (Ted Yu via jing9)
 
+HADOOP-11329. Add JAVA_LIBRARY_PATH to KMS startup options. (Arun Suresh 
via wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ddffcd8f/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh 
b/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
index 88a2b86..44dfe6a 100644
--- a/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
+++ b/hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
@@ -47,3 +47,9 @@
 # The password of the SSL keystore if using SSL
 #
 # export KMS_SSL_KEYSTORE_PASS=password
+
+# The full path to any native libraries that need to be loaded
+# (For eg. location of natively compiled tomcat Apache portable
+# runtime (APR) libraries
+#
+# export JAVA_LIBRARY_PATH=${HOME}/lib/native

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ddffcd8f/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh 
b/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
index 24a1f54..f6ef6a5 100644
--- a/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
+++ b/hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
@@ -31,7 +31,15 @@ BASEDIR=`cd ${BASEDIR}/..;pwd`
 
 KMS_SILENT=${KMS_SILENT:-true}
 
-source ${HADOOP_LIBEXEC_DIR:-${BASEDIR}/libexec}/kms-config.sh
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-${BASEDIR}/libexec}
+source ${HADOOP_LIBEXEC_DIR}/kms-config.sh
+
+
+if [ "x${JAVA_LIBRARY_PATH}" = "x" ]; then
+  JAVA_LIBRARY_PATH="${HADOOP_LIBEXEC_DIR}/../lib/native/"
+else
+  JAVA_LIBRARY_PATH="${HADOOP_LIBEXEC_DIR}/../lib/native/:${JAVA_LIBRARY_PATH}"
+fi
 
 # The Java System property 'kms.http.port' it is not used by Kms,
 # it is used in Tomcat's server.xml configuration file
@@ -50,6 +58,7 @@
 catalina_opts="${catalina_opts} -Dkms.admin.port=${KMS_ADMIN_PORT};"
 catalina_opts="${catalina_opts} -Dkms.http.port=${KMS_HTTP_PORT};"
 catalina_opts="${catalina_opts} -Dkms.max.threads=${KMS_MAX_THREADS};"
 catalina_opts="${catalina_opts} -Dkms.ssl.keystore.file=${KMS_SSL_KEYSTORE_FILE};"
+catalina_opts="${catalina_opts} -Djava.library.path=${JAVA_LIBRARY_PATH};"
 
 print "Adding to CATALINA_OPTS: ${catalina_opts}"
 print "Found KMS_SSL_KEYSTORE_PASS: `echo ${KMS_SSL_KEYSTORE_PASS} | sed 's/./*/g'`"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ddffcd8f/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
--
diff --git a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm 
b/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
index 11b84d3..80d9a48 100644
--- a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
+++ b/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
@@ -159,6 +159,15 @@ hadoop-${project.version} $ sbin/kms.sh start
   NOTE: You need to restart the KMS for the configuration changes to take
   effect.
 
+** Loading native libraries
+
+  The following environment variable (which can be set in KMS's
+  etc/hadoop/kms-env.sh script) can be used to specify the location
+  of any required native 

[26/41] hadoop git commit: YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration. (rchiang via rkanter)

2014-12-08 Thread kasha
YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in 
YarnConfiguration. (rchiang via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3c72f54e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3c72f54e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3c72f54e

Branch: refs/heads/YARN-2139
Commit: 3c72f54ef581b4f3e2eb84e1e24e459c38d3f769
Parents: 9cdaec6
Author: Robert Kanter rkan...@apache.org
Authored: Fri Dec 5 12:07:01 2014 -0800
Committer: Robert Kanter rkan...@apache.org
Committed: Fri Dec 5 12:07:41 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt   | 3 +++
 .../main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c72f54e/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 252b7d5..9804d61 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -189,6 +189,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2874. Dead lock in DelegationTokenRenewer which blocks RM to execute
 any further apps. (Naganarasimha G R via kasha)
 
+YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in
+YarnConfiguration. (rchiang via rkanter)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c72f54e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index f0f88d8..10ba832 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -819,7 +819,7 @@ public class YarnConfiguration extends Configuration {
   public static final String NM_CONTAINER_MON_PROCESS_TREE =
 NM_PREFIX + container-monitor.process-tree.class;
   public static final String PROCFS_USE_SMAPS_BASED_RSS_ENABLED = NM_PREFIX +
-  ".container-monitor.procfs-tree.smaps-based-rss.enabled";
+  "container-monitor.procfs-tree.smaps-based-rss.enabled";
   public static final boolean DEFAULT_PROCFS_USE_SMAPS_BASED_RSS_ENABLED =
   false;
   



[21/41] hadoop git commit: HDFS-7454. Reduce memory footprint for AclEntries in NameNode. Contributed by Vinayakumar B.

2014-12-08 Thread kasha
HDFS-7454. Reduce memory footprint for AclEntries in NameNode. Contributed by 
Vinayakumar B.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0653918d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0653918d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0653918d

Branch: refs/heads/YARN-2139
Commit: 0653918dad855b394e8e3b8b3f512f474d872ee9
Parents: 7896815
Author: Haohui Mai whe...@apache.org
Authored: Thu Dec 4 20:49:45 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Thu Dec 4 20:49:45 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../server/namenode/AclEntryStatusFormat.java   | 136 +++
 .../hadoop/hdfs/server/namenode/AclFeature.java |  24 +++-
 .../hadoop/hdfs/server/namenode/AclStorage.java |  30 +++-
 .../server/namenode/FSImageFormatPBINode.java   |  22 +--
 .../server/namenode/FSPermissionChecker.java|  39 ++
 .../snapshot/FSImageFormatPBSnapshot.java   |  13 +-
 .../hdfs/server/namenode/FSAclBaseTest.java |   4 +-
 8 files changed, 223 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0653918d/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 4432024..02f41cc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -432,6 +432,9 @@ Release 2.7.0 - UNRELEASED
 
   OPTIMIZATIONS
 
+HDFS-7454. Reduce memory footprint for AclEntries in NameNode.
+(Vinayakumar B via wheat9)
+
   BUG FIXES
 
 HDFS-6741. Improve permission denied message when

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0653918d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
new file mode 100644
index 000..82aa214
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
@@ -0,0 +1,136 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.util.List;
+
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclEntryScope;
+import org.apache.hadoop.fs.permission.AclEntryType;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.hdfs.util.LongBitFormat;
+
+import com.google.common.collect.ImmutableList;
+
+/**
+ * Class to pack an AclEntry into an integer. <br>
+ * An ACL entry is represented by a 32-bit integer in Big Endian format. <br>
+ * The bits can be divided in five segments: <br>
+ * [0:1) || [1:3) || [3:6) || [6:7) || [7:32) <br>
+ * <br>
+ * [0:1) -- the scope of the entry (AclEntryScope) <br>
+ * [1:3) -- the type of the entry (AclEntryType) <br>
+ * [3:6) -- the permission of the entry (FsAction) <br>
+ * [6:7) -- A flag to indicate whether Named entry or not <br>
+ * [7:32) -- the name of the entry, which is an ID that points to a <br>
+ * string in the StringTableSection. <br>
+ */
+public enum AclEntryStatusFormat {
+
+  SCOPE(null, 1),
+  TYPE(SCOPE.BITS, 2),
+  PERMISSION(TYPE.BITS, 3),
+  NAMED_ENTRY_CHECK(PERMISSION.BITS, 1),
+  NAME(NAMED_ENTRY_CHECK.BITS, 25);
+
+  private final LongBitFormat BITS;
+
+  private AclEntryStatusFormat(LongBitFormat previous, int length) {
+BITS = new LongBitFormat(name(), previous, length, 0);
+  }
+
+  static AclEntryScope getScope(int aclEntry) {
+int ordinal = (int) SCOPE.BITS.retrieve(aclEntry);
+return AclEntryScope.values()[ordinal];
+  }
+
+  static AclEntryType getType(int aclEntry) {
+int 
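The packed layout described in the javadoc above can be exercised with a small standalone sketch. It uses plain bit arithmetic with the field widths taken from the enum (1, 2, 3, 1, and 25 bits); the pack order, class name, and method names here are illustrative assumptions, not Hadoop's `LongBitFormat` internals:

```java
public class AclEntryPackSketch {
    // Field widths from AclEntryStatusFormat: scope=1, type=2, permission=3,
    // named-entry flag=1, name id=25 bits (32 bits in total).
    static int pack(int scope, int type, int perm, boolean named, int nameId) {
        return (scope & 0x1)
                | ((type & 0x3) << 1)
                | ((perm & 0x7) << 3)
                | ((named ? 1 : 0) << 6)
                | ((nameId & 0x1FFFFFF) << 7);
    }

    static int scope(int e)     { return e & 0x1; }
    static int type(int e)      { return (e >>> 1) & 0x3; }
    static int perm(int e)      { return (e >>> 3) & 0x7; }
    static boolean named(int e) { return ((e >>> 6) & 0x1) != 0; }
    static int nameId(int e)    { return (e >>> 7) & 0x1FFFFFF; }

    public static void main(String[] args) {
        // Round-trip a named GROUP-type entry with rwx and a string-table id.
        int e = pack(0, 1, 7, true, 12345);
        if (type(e) != 1 || perm(e) != 7 || !named(e) || nameId(e) != 12345) {
            throw new AssertionError("round-trip failed");
        }
        System.out.println("packed entry = 0x" + Integer.toHexString(e));
    }
}
```

Hadoop's actual implementation chains `LongBitFormat` instances (as in the enum constructors above) rather than hand-written shifts, but the round-trip property demonstrated here is the same.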

[17/41] hadoop git commit: HADOOP-11348. Remove unused variable from CMake error message for finding openssl (Dian Fu via Colin P. McCabe)

2014-12-08 Thread kasha
HADOOP-11348. Remove unused variable from CMake error message for finding 
openssl (Dian Fu via Colin P. McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/565b0e60
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/565b0e60
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/565b0e60

Branch: refs/heads/YARN-2139
Commit: 565b0e60a8fc4ae5bc0083cc6a6ddb2d01952f32
Parents: 1bbcc3d
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Thu Dec 4 12:51:42 2014 -0800
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Thu Dec 4 12:52:39 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt| 3 +++
 hadoop-common-project/hadoop-common/src/CMakeLists.txt | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/565b0e60/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index f53bceb..f2a086e 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -499,6 +499,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11332. KerberosAuthenticator#doSpnegoSequence should check if
 kerberos TGT is available in the subject. (Dian Fu via atm)
 
+HADOOP-11348. Remove unused variable from CMake error message for finding
+openssl (Dian Fu via Colin P. McCabe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/565b0e60/hadoop-common-project/hadoop-common/src/CMakeLists.txt
--
diff --git a/hadoop-common-project/hadoop-common/src/CMakeLists.txt 
b/hadoop-common-project/hadoop-common/src/CMakeLists.txt
index b8ac460..29fa2b8 100644
--- a/hadoop-common-project/hadoop-common/src/CMakeLists.txt
+++ b/hadoop-common-project/hadoop-common/src/CMakeLists.txt
@@ -202,7 +202,7 @@ if (USABLE_OPENSSL)
 ${D}/crypto/OpensslCipher.c
 ${D}/crypto/random/OpensslSecureRandom.c)
 else (USABLE_OPENSSL)
-MESSAGE("Cannot find a usable OpenSSL library.  
OPENSSL_LIBRARY=${OPENSSL_LIBRARY}, OPENSSL_INCLUDE_DIR=${OPENSSL_INCLUDE_DIR}, 
CUSTOM_OPENSSL_INCLUDE_DIR=${CUSTOM_OPENSSL_INCLUDE_DIR}, 
CUSTOM_OPENSSL_PREFIX=${CUSTOM_OPENSSL_PREFIX}, 
CUSTOM_OPENSSL_INCLUDE=${CUSTOM_OPENSSL_INCLUDE}")
+MESSAGE("Cannot find a usable OpenSSL library.  
OPENSSL_LIBRARY=${OPENSSL_LIBRARY}, OPENSSL_INCLUDE_DIR=${OPENSSL_INCLUDE_DIR}, 
CUSTOM_OPENSSL_LIB=${CUSTOM_OPENSSL_LIB}, 
CUSTOM_OPENSSL_PREFIX=${CUSTOM_OPENSSL_PREFIX}, 
CUSTOM_OPENSSL_INCLUDE=${CUSTOM_OPENSSL_INCLUDE}")
 IF(REQUIRE_OPENSSL)
MESSAGE(FATAL_ERROR "Terminating build because require.openssl was 
specified.")
 ENDIF(REQUIRE_OPENSSL)



[14/41] hadoop git commit: YARN-2880. Added a test to make sure node labels will be recovered if RM restart is enabled. Contributed by Rohith Sharmaks

2014-12-08 Thread kasha
YARN-2880. Added a test to make sure node labels will be recovered if RM 
restart is enabled. Contributed by Rohith Sharmaks


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/73fbb3c6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/73fbb3c6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/73fbb3c6

Branch: refs/heads/YARN-2139
Commit: 73fbb3c66b0d90abee49c766ee9d2f05517cb9de
Parents: a31e016
Author: Jian He jia...@apache.org
Authored: Wed Dec 3 17:14:52 2014 -0800
Committer: Jian He jia...@apache.org
Committed: Wed Dec 3 17:14:52 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../server/resourcemanager/TestRMRestart.java   | 91 
 2 files changed, 94 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/73fbb3c6/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 91151ad..30b9260 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -57,6 +57,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2765. Added leveldb-based implementation for RMStateStore. (Jason Lowe
 via jianhe)
 
+YARN-2880. Added a test to make sure node labels will be recovered
+if RM restart is enabled. (Rohith Sharmaks via jianhe)
+
   IMPROVEMENTS
 
 YARN-2891. Failed Container Executor does not provide a clear error

http://git-wip-us.apache.org/repos/asf/hadoop/blob/73fbb3c6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
index a42170b..29f0208 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
@@ -69,6 +69,7 @@ import org.apache.hadoop.yarn.api.records.Container;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.ContainerState;
 import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
+import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.api.records.ResourceRequest;
@@ -82,6 +83,7 @@ import 
org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NMContainerStatus;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
 import org.apache.hadoop.yarn.server.api.records.NodeAction;
+import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import 
org.apache.hadoop.yarn.server.resourcemanager.recovery.MemoryRMStateStore;
 import org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore;
 import 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.RMState;
@@ -105,6 +107,9 @@ import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Sets;
+
 public class TestRMRestart extends ParameterizedSchedulerTestBase {
   private final static File TEMP_DIR = new File(System.getProperty(
"test.build.data", "/tmp"), "decommision");
@@ -2036,4 +2041,90 @@ public class TestRMRestart extends 
ParameterizedSchedulerTestBase {
 }
   }
 
+  // Test does following verification
+  // 1. Start RM1 with store path /tmp
+  // 2. Add/remove/replace labels to cluster and node label and verify
+  // 3. Start RM2 with store path /tmp only
+  // 4. Get cluster and node label, it should be present by recovering it
+  @Test(timeout = 2)
+  public void testRMRestartRecoveringNodeLabelManager() throws Exception {
+MemoryRMStateStore memStore = new MemoryRMStateStore();
+memStore.init(conf);
+MockRM rm1 = new MockRM(conf, memStore) {
+  @Override
+  protected RMNodeLabelsManager createNodeLabelManager() {
+RMNodeLabelsManager mgr = new RMNodeLabelsManager();
+mgr.init(getConfig());
+return mgr;
+  }
+};

[40/41] hadoop git commit: HDFS-7486. Consolidate XAttr-related implementation into a single class. Contributed by Haohui Mai.

2014-12-08 Thread kasha
HDFS-7486. Consolidate XAttr-related implementation into a single class. 
Contributed by Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6c5bbd7a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6c5bbd7a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6c5bbd7a

Branch: refs/heads/YARN-2139
Commit: 6c5bbd7a42d1e8b4416fd8870fd60c67867b35c9
Parents: 57cb43b
Author: Haohui Mai whe...@apache.org
Authored: Mon Dec 8 11:52:21 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Mon Dec 8 11:52:21 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../server/namenode/EncryptionZoneManager.java  |   3 +-
 .../hdfs/server/namenode/FSDirXAttrOp.java  | 460 +++
 .../hdfs/server/namenode/FSDirectory.java   | 295 ++--
 .../hdfs/server/namenode/FSEditLogLoader.java   |  19 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 227 +
 .../hdfs/server/namenode/TestFSDirectory.java   |  47 +-
 7 files changed, 554 insertions(+), 500 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c5bbd7a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index fabb98f..55026a2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -444,6 +444,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7384. 'getfacl' command and 'getAclStatus' output should be in sync.
 (Vinayakumar B via cnauroth)
 
+HDFS-7486. Consolidate XAttr-related implementation into a single class.
+(wheat9)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c5bbd7a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
index 135979f..faab1f0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
@@ -311,7 +311,8 @@ public class EncryptionZoneManager {
 xattrs.add(ezXAttr);
 // updating the xattr will call addEncryptionZone,
 // done this way to handle edit log loading
-dir.unprotectedSetXAttrs(src, xattrs, EnumSet.of(XAttrSetFlag.CREATE));
+FSDirXAttrOp.unprotectedSetXAttrs(dir, src, xattrs,
+  EnumSet.of(XAttrSetFlag.CREATE));
 return ezXAttr;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6c5bbd7a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
new file mode 100644
index 000..303b9e3
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
@@ -0,0 +1,460 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import org.apache.hadoop.HadoopIllegalArgumentException;
+import org.apache.hadoop.fs.XAttr;
+import 

[32/41] hadoop git commit: HDFS-7476. Consolidate ACL-related operations to a single class. Contributed by Haohui Mai.

2014-12-08 Thread kasha
HDFS-7476. Consolidate ACL-related operations to a single class. Contributed by 
Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9297f980
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9297f980
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9297f980

Branch: refs/heads/YARN-2139
Commit: 9297f980c2de8886ff970946a2513e6890cd5552
Parents: e227fb8
Author: cnauroth cnaur...@apache.org
Authored: Sat Dec 6 14:20:00 2014 -0800
Committer: cnauroth cnaur...@apache.org
Committed: Sat Dec 6 14:20:00 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hadoop/hdfs/server/namenode/AclStorage.java |  33 ---
 .../hadoop/hdfs/server/namenode/FSDirAclOp.java | 244 +++
 .../hdfs/server/namenode/FSDirectory.java   | 158 ++--
 .../hdfs/server/namenode/FSEditLogLoader.java   |   2 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 119 ++---
 .../hdfs/server/namenode/TestAuditLogger.java   |  79 ++
 7 files changed, 318 insertions(+), 320 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9297f980/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 87b02c4..769be43 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -438,6 +438,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7459. Consolidate cache-related implementation in FSNamesystem into
 a single class. (wheat9)
 
+HDFS-7476. Consolidate ACL-related operations to a single class.
+(wheat9 via cnauroth)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9297f980/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
index ac30597..a866046 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
@@ -241,39 +241,6 @@ final class AclStorage {
   }
 
   /**
-   * Completely removes the ACL from an inode.
-   *
-   * @param inode INode to update
-   * @param snapshotId int latest snapshot ID of inode
-   * @throws QuotaExceededException if quota limit is exceeded
-   */
-  public static void removeINodeAcl(INode inode, int snapshotId)
-  throws QuotaExceededException {
-AclFeature f = inode.getAclFeature();
-if (f == null) {
-  return;
-}
-
-FsPermission perm = inode.getFsPermission();
-List<AclEntry> featureEntries = getEntriesFromAclFeature(f);
-if (featureEntries.get(0).getScope() == AclEntryScope.ACCESS) {
-  // Restore group permissions from the feature's entry to permission
-  // bits, overwriting the mask, which is not part of a minimal ACL.
-  AclEntry groupEntryKey = new AclEntry.Builder()
-  .setScope(AclEntryScope.ACCESS).setType(AclEntryType.GROUP).build();
-  int groupEntryIndex = Collections.binarySearch(featureEntries,
-  groupEntryKey, AclTransformation.ACL_ENTRY_COMPARATOR);
-  assert groupEntryIndex >= 0;
-  FsAction groupPerm = featureEntries.get(groupEntryIndex).getPermission();
-  FsPermission newPerm = new FsPermission(perm.getUserAction(), groupPerm,
-  perm.getOtherAction(), perm.getStickyBit());
-  inode.setPermission(newPerm, snapshotId);
-}
-
-inode.removeAclFeature(snapshotId);
-  }
-
-  /**
* Updates an inode with a new ACL.  This method takes a full logical ACL and
* stores the entries to the inode's {@link FsPermission} and
* {@link AclFeature}.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9297f980/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
new file mode 100644
index 000..ac899aa
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
@@ -0,0 +1,244 @@
+/**
+ * Licensed to the Apache Software 
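For context, the `removeINodeAcl` method deleted above locates the access-scope GROUP entry by binary search over the sorted feature entries before copying its permission back into the inode's permission bits. A self-contained sketch of that lookup, with a simplified record standing in for `AclEntry` and the comparator standing in for `AclTransformation.ACL_ENTRY_COMPARATOR` (all names and ordinals here are illustrative):

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class GroupEntryLookupSketch {
    // Simplified stand-in for AclEntry: scope/type ordinals plus permission bits.
    record Entry(int scope, int type, int perm) {}

    static final int ACCESS = 0;                      // AclEntryScope.ACCESS
    static final int USER = 0, GROUP = 1, OTHER = 2;  // AclEntryType ordinals

    // Orders entries by (scope, type); permission is irrelevant to the key.
    static final Comparator<Entry> CMP =
            Comparator.comparingInt(Entry::scope).thenComparingInt(Entry::type);

    // Returns the permission of the ACCESS/GROUP entry, as removeINodeAcl did
    // before restoring it into the inode's FsPermission group bits.
    static int groupPerm(List<Entry> sortedEntries) {
        Entry key = new Entry(ACCESS, GROUP, 0);
        int i = Collections.binarySearch(sortedEntries, key, CMP);
        if (i < 0) {
            throw new IllegalStateException("ACL is missing its GROUP entry");
        }
        return sortedEntries.get(i).perm();
    }

    public static void main(String[] args) {
        List<Entry> acl = List.of(
                new Entry(ACCESS, USER, 7),
                new Entry(ACCESS, GROUP, 5),
                new Entry(ACCESS, OTHER, 0));
        System.out.println("group perm = " + groupPerm(acl));
    }
}
```

`Collections.binarySearch` returns a negative insertion point when the key is absent, which is why the original code could assert the index was non-negative: a valid minimal ACL always carries a GROUP entry.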

[37/41] hadoop git commit: HDFS-7384. getfacl command and getAclStatus output should be in sync. Contributed by Vinayakumar B.

2014-12-08 Thread kasha
HDFS-7384. getfacl command and getAclStatus output should be in sync. 
Contributed by Vinayakumar B.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ffe942b8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ffe942b8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ffe942b8

Branch: refs/heads/YARN-2139
Commit: ffe942b82c1208bc7b22899da3a233944cb5ab52
Parents: 144da2e
Author: cnauroth cnaur...@apache.org
Authored: Mon Dec 8 10:23:09 2014 -0800
Committer: cnauroth cnaur...@apache.org
Committed: Mon Dec 8 10:23:09 2014 -0800

--
 .../apache/hadoop/fs/permission/AclEntry.java   |  4 +-
 .../apache/hadoop/fs/permission/AclStatus.java  | 79 +++-
 .../org/apache/hadoop/fs/shell/AclCommands.java | 32 
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java | 19 +++--
 .../hadoop/hdfs/server/namenode/FSDirAclOp.java |  4 +-
 .../tools/offlineImageViewer/FSImageLoader.java | 31 +++-
 .../org/apache/hadoop/hdfs/web/JsonUtil.java| 17 -
 .../hadoop-hdfs/src/main/proto/acl.proto|  1 +
 .../hadoop-hdfs/src/site/apt/WebHDFS.apt.vm |  1 +
 .../hdfs/server/namenode/FSAclBaseTest.java | 46 
 .../src/test/resources/testAclCLI.xml   | 53 +
 12 files changed, 246 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ffe942b8/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
index b65b7a0..b9def64 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
@@ -146,7 +146,9 @@ public class AclEntry {
  * @return Builder this builder, for call chaining
  */
 public Builder setName(String name) {
-  this.name = name;
+  if (name != null && !name.isEmpty()) {
+this.name = name;
+  }
   return this;
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ffe942b8/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
index 4a7258f..9d7500a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
@@ -23,6 +23,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 
 import com.google.common.base.Objects;
+import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
 
 /**
@@ -36,6 +37,7 @@ public class AclStatus {
   private final String group;
   private final boolean stickyBit;
  private final List<AclEntry> entries;
+  private final FsPermission permission;
 
   /**
* Returns the file owner.
@@ -73,6 +75,14 @@ public class AclStatus {
 return entries;
   }
 
+  /**
+   * Returns the permission set for the path
+   * @return {@link FsPermission} for the path
+   */
+  public FsPermission getPermission() {
+return permission;
+  }
+
   @Override
   public boolean equals(Object o) {
 if (o == null) {
@@ -113,6 +123,7 @@ public class AclStatus {
 private String group;
 private boolean stickyBit;
+private List<AclEntry> entries = Lists.newArrayList();
+private FsPermission permission = null;
 
 /**
  * Sets the file owner.
@@ -173,12 +184,21 @@ public class AclStatus {
 }
 
 /**
+ * Sets the permission for the file.
+ * @param permission
+ */
+public Builder setPermission(FsPermission permission) {
+  this.permission = permission;
+  return this;
+}
+
+/**
  * Builds a new AclStatus populated with the set properties.
  *
  * @return AclStatus new AclStatus
  */
 public AclStatus build() {
-  return new AclStatus(owner, group, stickyBit, entries);
+  return new AclStatus(owner, group, stickyBit, entries, permission);
 }
   }
 
@@ -190,12 +210,67 @@ public class AclStatus {
* @param group String file group
* @param stickyBit the sticky bit
* @param entries 
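The `Builder` additions above (a chained `setPermission` alongside the existing setters) follow the standard builder pattern. A minimal self-contained stand-in sketches how a caller composes it; the setter names mirror the diff, but the simplified types (`String` in place of `FsPermission` and `AclEntry`) are illustrative assumptions, not Hadoop's API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class AclStatusSketch {
    private final String owner;
    private final String group;
    private final boolean stickyBit;
    private final List<String> entries;   // real class: List<AclEntry>
    private final String permission;      // real class: FsPermission, may be null

    private AclStatusSketch(String owner, String group, boolean stickyBit,
                            List<String> entries, String permission) {
        this.owner = owner;
        this.group = group;
        this.stickyBit = stickyBit;
        this.entries = Collections.unmodifiableList(entries);
        this.permission = permission;
    }

    public String getOwner() { return owner; }
    public String getGroup() { return group; }
    public boolean isStickyBit() { return stickyBit; }
    public List<String> getEntries() { return entries; }
    public String getPermission() { return permission; }

    public static class Builder {
        private String owner;
        private String group;
        private boolean stickyBit;
        private final List<String> entries = new ArrayList<>();
        private String permission = null;   // the field this commit introduces

        public Builder owner(String owner) { this.owner = owner; return this; }
        public Builder group(String group) { this.group = group; return this; }
        public Builder stickyBit(boolean bit) { this.stickyBit = bit; return this; }
        public Builder addEntry(String entry) { entries.add(entry); return this; }
        public Builder setPermission(String permission) {
            this.permission = permission;   // carried through build(), as in the diff
            return this;
        }
        public AclStatusSketch build() {
            return new AclStatusSketch(owner, group, stickyBit, entries, permission);
        }
    }

    public static void main(String[] args) {
        AclStatusSketch s = new Builder().owner("hdfs").group("supergroup")
                .setPermission("rwxr-xr-x").build();
        System.out.println(s.getOwner() + ":" + s.getGroup() + " " + s.getPermission());
    }
}
```

Carrying the permission inside the status object is what lets `getfacl` print the effective permission without a second `getFileStatus` round trip, which is the point of HDFS-7384.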

[35/41] hadoop git commit: MAPREDUCE-6177. Minor typo in the EncryptedShuffle document about ssl-client.xml. Contributed by Yangping Wu. (harsh)

2014-12-08 Thread kasha
MAPREDUCE-6177. Minor typo in the EncryptedShuffle document about 
ssl-client.xml. Contributed by Yangping Wu. (harsh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8963515b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8963515b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8963515b

Branch: refs/heads/YARN-2139
Commit: 8963515b880b78068791f11abe4f5df332553be1
Parents: 120e1de
Author: Harsh J ha...@cloudera.com
Authored: Mon Dec 8 15:57:52 2014 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Mon Dec 8 15:57:52 2014 +0530

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../src/site/apt/EncryptedShuffle.apt.vm  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8963515b/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 3f34acd..c757d40 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -246,6 +246,9 @@ Release 2.7.0 - UNRELEASED
 
   BUG FIXES
 
+MAPREDUCE-6177. Minor typo in the EncryptedShuffle document about
+ssl-client.xml (Yangping Wu via harsh)
+
 MAPREDUCE-5918. LineRecordReader can return the same decompressor to
 CodecPool multiple times (Sergey Murylev via raviprak)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8963515b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
index da412df..68e569d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
@@ -202,7 +202,7 @@ Hadoop MapReduce Next Generation - Encrypted Shuffle
 
 ** ssl-client.xml (Reducer/Fetcher) Configuration:
 
-  The mapred user should own the ssl-server.xml file and it should have
+  The mapred user should own the ssl-client.xml file and it should have
   default permissions.
 
 
*-+-+-+



[38/41] hadoop git commit: HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid. Contributed by Akira AJISAKA.

2014-12-08 Thread kasha
HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is 
invalid. Contributed by Akira AJISAKA.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d555bb21
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d555bb21
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d555bb21

Branch: refs/heads/YARN-2139
Commit: d555bb2120cb44d094546e6b6560926561876c10
Parents: ffe942b
Author: cnauroth cnaur...@apache.org
Authored: Mon Dec 8 11:04:29 2014 -0800
Committer: cnauroth cnaur...@apache.org
Committed: Mon Dec 8 11:04:29 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java  | 2 +-
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml   | 3 ++-
 3 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d555bb21/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 7fcc8d2..fabb98f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -550,6 +550,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7472. Fix typo in message of ReplicaNotFoundException.
 (Masatake Iwasaki via wheat9)
 
+HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0
+is invalid. (Akira AJISAKA via cnauroth)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d555bb21/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 82741ce..aee79af 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -242,7 +242,7 @@ public class FSDirectory implements Closeable {
 Preconditions.checkArgument(
maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS, "Cannot set "
+ DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
-+ " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
++ " to a value less than 1 or greater than " + MAX_DIR_ITEMS);
 
 int threshold = conf.getInt(
 DFSConfigKeys.DFS_NAMENODE_NAME_CACHE_THRESHOLD_KEY,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d555bb21/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 06d7ba8..55a876e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -314,7 +314,8 @@
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>1048576</value>
  <description>Defines the maximum number of items that a directory may
-  contain.  A value of 0 will disable the check.</description>
+  contain. Cannot set the property to a value less than 1 or more than
+  640.</description>
 </property>
 
 <property>
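The documentation change pairs with the tightened precondition in `FSDirectory` shown earlier in this commit: after the change, 0 no longer means "disable the check" and is rejected outright. A plain-Java sketch of that range check (the constant's value and the method name here are illustrative, not Hadoop's actual `MAX_DIR_ITEMS`):

```java
public class DirItemLimitCheck {
    // Illustrative upper bound; the real constant lives in FSDirectory.
    static final int MAX_DIR_ITEMS = 64 * 100 * 1000;

    // Mirrors the Preconditions.checkArgument in the diff above.
    static int validateMaxDirItems(int maxDirItems) {
        if (!(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS)) {
            throw new IllegalArgumentException(
                    "Cannot set dfs.namenode.fs-limits.max-directory-items"
                    + " to a value less than 1 or greater than " + MAX_DIR_ITEMS);
        }
        return maxDirItems;
    }

    public static void main(String[] args) {
        validateMaxDirItems(1048576);       // the default from hdfs-default.xml
        try {
            validateMaxDirItems(0);         // previously "disable"; now invalid
            throw new AssertionError("expected rejection of 0");
        } catch (IllegalArgumentException expected) {
            System.out.println("0 rejected as expected");
        }
    }
}
```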



[01/41] hadoop git commit: HDFS-7462. Consolidate implementation of mkdirs() into a single class. Contributed by Haohui Mai.

2014-12-08 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/YARN-2139 52bcefca8 -> ddffcd8fa


HDFS-7462. Consolidate implementation of mkdirs() into a single class. 
Contributed by Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/185e0c7b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/185e0c7b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/185e0c7b

Branch: refs/heads/YARN-2139
Commit: 185e0c7b4c056b88f606362c71e4a22aae7076e0
Parents: 52bcefc
Author: Haohui Mai whe...@apache.org
Authored: Tue Dec 2 14:53:45 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Tue Dec 2 14:53:45 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hdfs/server/namenode/FSDirMkdirOp.java  | 238 ++
 .../hdfs/server/namenode/FSDirectory.java   | 111 +
 .../hdfs/server/namenode/FSEditLogLoader.java   |  10 +-
 .../hdfs/server/namenode/FSImageFormat.java |   6 +-
 .../server/namenode/FSImageFormatPBINode.java   |   4 +-
 .../server/namenode/FSImageSerialization.java   |   2 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 244 ++-
 .../org/apache/hadoop/hdfs/DFSTestUtil.java |   8 +-
 .../apache/hadoop/hdfs/TestRenameWhileOpen.java |   2 +-
 .../hdfs/server/namenode/NameNodeAdapter.java   |   2 +-
 .../hdfs/server/namenode/TestEditLog.java   |   6 +-
 .../hdfs/server/namenode/TestEditLogRace.java   |   4 +-
 .../hdfs/server/namenode/TestINodeFile.java |  20 +-
 .../server/namenode/TestNameNodeRecovery.java   |   2 +-
 .../namenode/ha/TestEditLogsDuringFailover.java |   2 +-
 .../namenode/ha/TestStandbyCheckpoints.java |   2 +-
 17 files changed, 346 insertions(+), 320 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/185e0c7b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 196673e..4d2fb05 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -416,6 +416,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7438. Consolidate the implementation of rename() into a single class.
 (wheat9)
 
+HDFS-7462. Consolidate implementation of mkdirs() into a single class.
+(wheat9)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/185e0c7b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
new file mode 100644
index 000..01cb57f
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
@@ -0,0 +1,238 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.InvalidPathException;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.UnresolvedLinkException;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.fs.permission.PermissionStatus;
+import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.hdfs.protocol.AclException;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
+import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
+import org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException;
+import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
+
+import java.io.IOException;
+import java.util.List;
+
+import static org.apache.hadoop.util.Time.now;
+
+class 

[16/41] hadoop git commit: HDFS-7424. Add web UI for NFS gateway. Contributed by Brandon Li

2014-12-08 Thread kasha
HDFS-7424. Add web UI for NFS gateway. Contributed by Brandon Li


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1bbcc3d0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1bbcc3d0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1bbcc3d0

Branch: refs/heads/YARN-2139
Commit: 1bbcc3d0320b9435317bfeaa078af22d4de8d00c
Parents: 9d1a8f5
Author: Brandon Li brando...@apache.org
Authored: Thu Dec 4 10:46:26 2014 -0800
Committer: Brandon Li brando...@apache.org
Committed: Thu Dec 4 10:46:26 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml |   5 +
 .../hadoop/hdfs/nfs/conf/NfsConfigKeys.java |  10 ++
 .../hadoop/hdfs/nfs/nfs3/Nfs3HttpServer.java| 111 +++
 .../hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java|  24 +++-
 .../hdfs/nfs/nfs3/TestNfs3HttpServer.java   |  89 +++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   2 +
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   3 +
 7 files changed, 242 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1bbcc3d0/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
index 42962a6..9a9d29c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
@@ -179,6 +179,11 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;
   <artifactId>xmlenc</artifactId>
   <scope>compile</scope>
 </dependency>
+<dependency>
+  <groupId>org.bouncycastle</groupId>
+  <artifactId>bcprov-jdk16</artifactId>
+  <scope>test</scope>
+</dependency>
   </dependencies>
 
   <profiles>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1bbcc3d0/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
index 178d855..7566791 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
@@ -60,4 +60,14 @@ public class NfsConfigKeys {
   
  public final static String LARGE_FILE_UPLOAD = "nfs.large.file.upload";
  public final static boolean LARGE_FILE_UPLOAD_DEFAULT = true;
+  
+  public static final String NFS_HTTP_PORT_KEY = "nfs.http.port";
+  public static final int NFS_HTTP_PORT_DEFAULT = 50079;
+  public static final String NFS_HTTP_ADDRESS_KEY = "nfs.http.address";
+  public static final String NFS_HTTP_ADDRESS_DEFAULT = "0.0.0.0:" + 
NFS_HTTP_PORT_DEFAULT;
+
+  public static final String NFS_HTTPS_PORT_KEY = "nfs.https.port";
+  public static final int NFS_HTTPS_PORT_DEFAULT = 50579;
+  public static final String NFS_HTTPS_ADDRESS_KEY = "nfs.https.address";
+  public static final String NFS_HTTPS_ADDRESS_DEFAULT = "0.0.0.0:" + 
NFS_HTTPS_PORT_DEFAULT;
 }
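The keys above follow a common Hadoop pattern: a numeric port default folded into a host:port address default, so changing the port constant keeps the bind address consistent. A minimal self-contained sketch of that pattern (plain Java, no Hadoop dependency; the class name is illustrative, the constant values mirror the diff above):

```java
// Sketch of the key/default pattern used by NfsConfigKeys above.
// Stand-in class: the real constants are read through Hadoop's Configuration.
public class NfsHttpDefaults {
    public static final String NFS_HTTP_PORT_KEY = "nfs.http.port";
    public static final int NFS_HTTP_PORT_DEFAULT = 50079;
    // The address default reuses the port default; string concatenation
    // with an int yields "0.0.0.0:50079".
    public static final String NFS_HTTP_ADDRESS_DEFAULT =
        "0.0.0.0:" + NFS_HTTP_PORT_DEFAULT;

    public static void main(String[] args) {
        System.out.println(NFS_HTTP_ADDRESS_DEFAULT); // 0.0.0.0:50079
    }
}
```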

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1bbcc3d0/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3HttpServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3HttpServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3HttpServer.java
new file mode 100644
index 000..c37a21e
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3HttpServer.java
@@ -0,0 +1,111 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.nfs.nfs3;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import 

[11/41] hadoop git commit: HDFS-7458. Add description to the nfs ports in core-site.xml used by nfs test to avoid confusion. Contributed by Yongjun Zhang

2014-12-08 Thread kasha
HDFS-7458. Add description to the nfs ports in core-site.xml used by nfs test 
to avoid confusion. Contributed by Yongjun Zhang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a1e82259
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a1e82259
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a1e82259

Branch: refs/heads/YARN-2139
Commit: a1e822595c3dc5eadbd5430e57bc4691d09d5e68
Parents: 1812241
Author: Brandon Li brando...@apache.org
Authored: Wed Dec 3 13:31:26 2014 -0800
Committer: Brandon Li brando...@apache.org
Committed: Wed Dec 3 13:31:26 2014 -0800

--
 .../hadoop-hdfs-nfs/src/test/resources/core-site.xml  | 14 ++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   |  3 +++
 2 files changed, 17 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1e82259/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml 
b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml
index f90ca03..f400bf2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/resources/core-site.xml
@@ -20,10 +20,24 @@
<property>
  <name>nfs.server.port</name>
  <value>2079</value>
+  <description>
+Specify the port number used by Hadoop NFS.
+Notice that the default value here is different than the default Hadoop nfs
+port 2049 specified in hdfs-default.xml. 2049 is also the default port for
+Linux nfs. The setting here allows starting Hadoop nfs for testing even if
+an nfs server (Linux or Hadoop) is already running on the same host.
+  </description>
</property>

<property>
  <name>nfs.mountd.port</name>
  <value>4272</value>
+  <description>
+Specify the port number used by the Hadoop mount daemon.
+Notice that the default value here is different than the 4242 specified in 
+hdfs-default.xml. This setting allows starting Hadoop nfs mountd for
+testing even if the Linux or Hadoop mountd is already running on the
+same host.
+  </description>
</property>
</configuration>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1e82259/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 1679a71..a244dab 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -421,6 +421,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-6735. A minor optimization to avoid pread() be blocked by read()
 inside the same DFSInputStream (Lars Hofhansl via stack)
+
+HDFS-7458. Add description to the nfs ports in core-site.xml used by nfs
+test to avoid confusion (Yongjun Zhang via brandonli)
 
   OPTIMIZATIONS
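The description above explains that the test config hard-codes non-default ports (2079, 4272) so the Hadoop NFS test can start even when another NFS server holds the standard ports. A common alternative in test code is to ask the kernel for an unused port by binding to port 0; a minimal sketch of that technique (plain Java, not how this patch does it):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;

public class FreePort {
    // Bind to port 0 so the OS picks an unused ephemeral port, then
    // release it so the service under test can bind it. Note the small
    // race window: another process could grab the port between close()
    // and reuse, which is why fixed well-known test ports (as in the
    // diff above) are also a reasonable choice.
    static int pickFreePort() {
        try (ServerSocket s = new ServerSocket(0)) {
            return s.getLocalPort();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("free port: " + pickFreePort());
    }
}
```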
 



[31/41] hadoop git commit: HDFS-7459. Consolidate cache-related implementation in FSNamesystem into a single class. Contributed by Haohui Mai.

2014-12-08 Thread kasha
HDFS-7459. Consolidate cache-related implementation in FSNamesystem into a 
single class. Contributed by Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e227fb8f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e227fb8f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e227fb8f

Branch: refs/heads/YARN-2139
Commit: e227fb8fbcd414717faded9454b8ef813f7aafea
Parents: 0707e4e
Author: Haohui Mai whe...@apache.org
Authored: Fri Dec 5 18:35:45 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Fri Dec 5 18:37:07 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hdfs/server/namenode/FSNDNCacheOp.java  | 124 
 .../hdfs/server/namenode/FSNamesystem.java  | 140 ++-
 3 files changed, 173 insertions(+), 94 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e227fb8f/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d4db732..87b02c4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -435,6 +435,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7474. Avoid resolving path in FSPermissionChecker. (jing9)
 
+HDFS-7459. Consolidate cache-related implementation in FSNamesystem into
+a single class. (wheat9)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e227fb8f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNDNCacheOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNDNCacheOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNDNCacheOp.java
new file mode 100644
index 000..093ee74
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNDNCacheOp.java
@@ -0,0 +1,124 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.fs.BatchedRemoteIterator.BatchedListEntries;
+import org.apache.hadoop.fs.CacheFlag;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
+import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
+import org.apache.hadoop.hdfs.protocol.CachePoolInfo;
+import org.apache.hadoop.security.AccessControlException;
+
+import java.io.IOException;
+import java.util.EnumSet;
+
+class FSNDNCacheOp {
+  static CacheDirectiveInfo addCacheDirective(
+  FSNamesystem fsn, CacheManager cacheManager,
+  CacheDirectiveInfo directive, EnumSet<CacheFlag> flags,
+  boolean logRetryCache)
+  throws IOException {
+
+final FSPermissionChecker pc = getFsPermissionChecker(fsn);
+
+if (directive.getId() != null) {
+  throw new IOException("addDirective: you cannot specify an ID " +
+  "for this operation.");
+}
+CacheDirectiveInfo effectiveDirective =
+cacheManager.addDirective(directive, pc, flags);
+fsn.getEditLog().logAddCacheDirectiveInfo(effectiveDirective,
+logRetryCache);
+return effectiveDirective;
+  }
+
+  static void modifyCacheDirective(
+  FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo 
directive,
+  EnumSet<CacheFlag> flags, boolean logRetryCache) throws IOException {
+final FSPermissionChecker pc = getFsPermissionChecker(fsn);
+
+cacheManager.modifyDirective(directive, pc, flags);
+fsn.getEditLog().logModifyCacheDirectiveInfo(directive, logRetryCache);
+  }
+
+  static void removeCacheDirective(
+  FSNamesystem fsn, CacheManager cacheManager, long id,
+  boolean logRetryCache)
+  throws IOException {
+

[19/41] hadoop git commit: HDFS-7468. Moving verify* functions to corresponding classes. Contributed by Li Lu.

2014-12-08 Thread kasha
HDFS-7468. Moving verify* functions to corresponding classes. Contributed by Li 
Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/26d8dec7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/26d8dec7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/26d8dec7

Branch: refs/heads/YARN-2139
Commit: 26d8dec756da1d9bd3df3b41a4dd5d8ff03bc5b2
Parents: 258623f
Author: Haohui Mai whe...@apache.org
Authored: Thu Dec 4 14:09:45 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Thu Dec 4 14:09:45 2014 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hdfs/server/namenode/FSDirRenameOp.java | 54 +--
 .../hdfs/server/namenode/FSDirSnapshotOp.java   | 20 +-
 .../hdfs/server/namenode/FSDirectory.java   | 72 ++--
 4 files changed, 78 insertions(+), 71 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/26d8dec7/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 2775285..4432024 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -427,6 +427,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7458. Add description to the nfs ports in core-site.xml used by nfs
 test to avoid confusion (Yongjun Zhang via brandonli)
 
+HDFS-7468. Moving verify* functions to corresponding classes.
+(Li Lu via wheat9)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/26d8dec7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
index f371f05..08241c4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.protocol.FSLimitException;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
 import org.apache.hadoop.hdfs.protocol.SnapshotException;
@@ -73,6 +74,51 @@ class FSDirRenameOp {
   }
 
   /**
+   * Verify quota for rename operation where srcInodes[srcInodes.length-1] 
moves
+   * dstInodes[dstInodes.length-1]
+   */
+  static void verifyQuotaForRename(FSDirectory fsd,
+  INode[] src, INode[] dst)
+  throws QuotaExceededException {
+if (!fsd.getFSNamesystem().isImageLoaded() || fsd.shouldSkipQuotaChecks()) 
{
+  // Do not check quota if edits log is still being processed
+  return;
+}
+int i = 0;
+while(src[i] == dst[i]) { i++; }
+// src[i - 1] is the last common ancestor.
+
+final Quota.Counts delta = src[src.length - 1].computeQuotaUsage();
+
+// Reduce the required quota by dst that is being removed
+final int dstIndex = dst.length - 1;
+if (dst[dstIndex] != null) {
+  delta.subtract(dst[dstIndex].computeQuotaUsage());
+}
+FSDirectory.verifyQuota(dst, dstIndex, delta.get(Quota.NAMESPACE),
+delta.get(Quota.DISKSPACE), src[i - 1]);
+  }
+
+  /**
+   * Checks file system limits (max component length and max directory items)
+   * during a rename operation.
+   */
+  static void verifyFsLimitsForRename(FSDirectory fsd,
+  INodesInPath srcIIP,
+  INodesInPath dstIIP)
+  throws FSLimitException.PathComponentTooLongException,
+  FSLimitException.MaxDirectoryItemsExceededException {
+byte[] dstChildName = dstIIP.getLastLocalName();
+INode[] dstInodes = dstIIP.getINodes();
+int pos = dstInodes.length - 1;
+fsd.verifyMaxComponentLength(dstChildName, dstInodes, pos);
+// Do not enforce max directory items if renaming within same directory.
+if (srcIIP.getINode(-2) != dstIIP.getINode(-2)) {
+  fsd.verifyMaxDirItems(dstInodes, pos);
+}
+  }
+
+  /**
* Change a path name
*
* @param fsd FSDirectory
@@ -129,8 +175,8 @@ class FSDirRenameOp {
 
 fsd.ezManager.checkMoveValidity(srcIIP, dstIIP, src);
 // Ensure dst has quota to accommodate rename
-fsd.verifyFsLimitsForRename(srcIIP, dstIIP);
-fsd.verifyQuotaForRename(srcIIP.getINodes(), 

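The `verifyQuotaForRename` helper moved above finds the last common ancestor of the source and destination paths with `while(src[i] == dst[i]) { i++; }`, walking the two resolved inode arrays in parallel (HDFS can use `==` because shared ancestors are the same INode objects). A self-contained sketch of the same idea with plain strings standing in for inodes (illustrative names, with explicit bounds checks the HDFS version can omit because both arrays share the root):

```java
public class CommonAncestor {
    // Walk the two resolved path arrays in parallel; the index just
    // before the first mismatch is the last common ancestor.
    static int lastCommonAncestor(String[] src, String[] dst) {
        int i = 0;
        while (i < src.length && i < dst.length && src[i].equals(dst[i])) {
            i++;
        }
        return i - 1; // index of the deepest shared path component
    }

    public static void main(String[] args) {
        String[] src = {"/", "user", "alice", "old"};
        String[] dst = {"/", "user", "alice", "archive", "new"};
        System.out.println(lastCommonAncestor(src, dst)); // 2 ("alice")
    }
}
```

Quota for the rename is then verified against the subtree rooted at that ancestor, after subtracting the usage of any destination inode being replaced.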
[36/41] hadoop git commit: HADOOP-10530 Make hadoop build on Java7+ only (stevel)

2014-12-08 Thread kasha
HADOOP-10530 Make hadoop build on Java7+ only (stevel)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/144da2e4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/144da2e4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/144da2e4

Branch: refs/heads/YARN-2139
Commit: 144da2e4656703751c48875b4ed34975d106edaa
Parents: 8963515
Author: Steve Loughran ste...@apache.org
Authored: Mon Dec 8 15:30:34 2014 +
Committer: Steve Loughran ste...@apache.org
Committed: Mon Dec 8 15:31:00 2014 +

--
 BUILDING.txt |  4 ++--
 hadoop-assemblies/pom.xml|  4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml | 17 -
 hadoop-common-project/hadoop-common/CHANGES.txt  |  2 ++
 hadoop-project/pom.xml   | 19 +++
 pom.xml  |  2 +-
 6 files changed, 22 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/144da2e4/BUILDING.txt
--
diff --git a/BUILDING.txt b/BUILDING.txt
index 06bef1f..94cbe5e 100644
--- a/BUILDING.txt
+++ b/BUILDING.txt
@@ -4,7 +4,7 @@ Build instructions for Hadoop
 Requirements:
 
 * Unix System
-* JDK 1.6+
+* JDK 1.7+
 * Maven 3.0 or later
 * Findbugs 1.3.9 (if running findbugs)
 * ProtocolBuffer 2.5.0
@@ -204,7 +204,7 @@ Building on Windows
 Requirements:
 
 * Windows System
-* JDK 1.6+
+* JDK 1.7+
 * Maven 3.0 or later
 * Findbugs 1.3.9 (if running findbugs)
 * ProtocolBuffer 2.5.0

http://git-wip-us.apache.org/repos/asf/hadoop/blob/144da2e4/hadoop-assemblies/pom.xml
--
diff --git a/hadoop-assemblies/pom.xml b/hadoop-assemblies/pom.xml
index 66b6bdb..b53bacc 100644
--- a/hadoop-assemblies/pom.xml
+++ b/hadoop-assemblies/pom.xml
@@ -45,10 +45,10 @@
 <configuration>
   <rules>
     <requireMavenVersion>
-      <version>[3.0.0,)</version>
+      <version>${enforced.maven.version}</version>
     </requireMavenVersion>
     <requireJavaVersion>
-      <version>1.6</version>
+      <version>${enforced.java.version}</version>
     </requireJavaVersion>
   </rules>
 </configuration>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/144da2e4/hadoop-common-project/hadoop-annotations/pom.xml
--
diff --git a/hadoop-common-project/hadoop-annotations/pom.xml 
b/hadoop-common-project/hadoop-annotations/pom.xml
index 84a106e..c011b45 100644
--- a/hadoop-common-project/hadoop-annotations/pom.xml
+++ b/hadoop-common-project/hadoop-annotations/pom.xml
@@ -40,23 +40,6 @@
 
  <profiles>
    <profile>
-      <id>os.linux</id>
-      <activation>
-        <os>
-          <family>!Mac</family>
-        </os>
-      </activation>
-      <dependencies>
-        <dependency>
-          <groupId>jdk.tools</groupId>
-          <artifactId>jdk.tools</artifactId>
-          <version>1.6</version>
-          <scope>system</scope>
-          <systemPath>${java.home}/../lib/tools.jar</systemPath>
-        </dependency>
-      </dependencies>
-    </profile>
-    <profile>
      <id>jdk1.7</id>
      <activation>
        <jdk>1.7</jdk>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/144da2e4/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a626388..616842f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -368,6 +368,8 @@ Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES
 
+HADOOP-10530 Make hadoop build on Java7+ only (stevel)
+
   NEW FEATURES
 
 HADOOP-10987. Provide an iterator-based listing API for FileSystem (kihwal)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/144da2e4/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index d3c404e..3b52dc3 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -73,6 +73,17 @@
    <zookeeper.version>3.4.6</zookeeper.version>

    <tomcat.version>6.0.41</tomcat.version>
+
+    <!-- define the Java language version used by the compiler -->
+    <javac.version>1.7</javac.version>
+
+    <!-- The java version enforced by the maven enforcer -->
+    <!-- more complex patterns can be used here, such as
+           [${javac.version})
+         for an open-ended enforcement
+    -->
+    <enforced.java.version>[${javac.version},)</enforced.java.version>
+    <enforced.maven.version>[3.0.2,)</enforced.maven.version>
  </properties>
 
 

[09/41] hadoop git commit: MAPREDUCE-5932. Provide an option to use a dedicated reduce-side shuffle log. Contributed by Gera Shegalov

2014-12-08 Thread kasha
MAPREDUCE-5932. Provide an option to use a dedicated reduce-side shuffle log. 
Contributed by Gera Shegalov


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/03ab24aa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/03ab24aa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/03ab24aa

Branch: refs/heads/YARN-2139
Commit: 03ab24aa01ffea1cacf1fa9cbbf73c3f2904d981
Parents: 22afae8
Author: Jason Lowe jl...@apache.org
Authored: Wed Dec 3 17:02:14 2014 +
Committer: Jason Lowe jl...@apache.org
Committed: Wed Dec 3 17:02:14 2014 +

--
 hadoop-mapreduce-project/CHANGES.txt|  3 +
 .../apache/hadoop/mapred/MapReduceChildJVM.java | 34 +
 .../v2/app/job/impl/TestMapReduceChildJVM.java  | 71 -
 .../apache/hadoop/mapreduce/v2/util/MRApps.java | 80 +---
 .../apache/hadoop/mapred/FileOutputFormat.java  |  4 +-
 .../java/org/apache/hadoop/mapred/TaskLog.java  |  4 +
 .../apache/hadoop/mapreduce/MRJobConfig.java| 14 
 .../src/main/resources/mapred-default.xml   | 28 +++
 .../org/apache/hadoop/mapred/YARNRunner.java|  9 +--
 .../hadoop/yarn/ContainerLogAppender.java   | 11 ++-
 .../yarn/ContainerRollingLogAppender.java   | 11 ++-
 .../hadoop/yarn/TestContainerLogAppender.java   |  1 +
 .../main/resources/container-log4j.properties   | 29 ++-
 13 files changed, 243 insertions(+), 56 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/03ab24aa/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 5417c3e..3f34acd 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -235,6 +235,9 @@ Release 2.7.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-5932. Provide an option to use a dedicated reduce-side shuffle
+log (Gera Shegalov via jlowe)
+
   OPTIMIZATIONS
 
 MAPREDUCE-6169. MergeQueue should release reference to the current item 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/03ab24aa/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java
index c790c57..817b3a5 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java
@@ -20,16 +20,14 @@ package org.apache.hadoop.mapred;
 
 import java.net.InetSocketAddress;
 import java.util.HashMap;
-import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Vector;
 
-import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.mapred.TaskLog.LogName;
-import org.apache.hadoop.mapreduce.ID;
 import org.apache.hadoop.mapreduce.MRJobConfig;
+import org.apache.hadoop.mapreduce.TypeConverter;
 import org.apache.hadoop.mapreduce.v2.util.MRApps;
 import org.apache.hadoop.yarn.api.ApplicationConstants;
 import org.apache.hadoop.yarn.api.ApplicationConstants.Environment;
@@ -52,20 +50,6 @@ public class MapReduceChildJVM {
 jobConf.get(JobConf.MAPRED_TASK_ENV));
   }
 
-  private static String getChildLogLevel(JobConf conf, boolean isMap) {
-if (isMap) {
-  return conf.get(
-  MRJobConfig.MAP_LOG_LEVEL, 
-  JobConf.DEFAULT_LOG_LEVEL.toString()
-  );
-} else {
-  return conf.get(
-  MRJobConfig.REDUCE_LOG_LEVEL, 
-  JobConf.DEFAULT_LOG_LEVEL.toString()
-  );
-}
-  }
-  
  public static void setVMEnv(Map<String, String> environment,
   Task task) {
 
@@ -79,7 +63,7 @@ public class MapReduceChildJVM {
 // streaming) it will have the correct loglevel.
 environment.put(
"HADOOP_ROOT_LOGGER", 
-getChildLogLevel(conf, task.isMapTask()) + ",console");
+MRApps.getChildLogLevel(conf, task.isMapTask()) + ",console");
 
 // TODO: The following is useful for instance in streaming tasks. Should be
 // set in ApplicationMaster's env by the RM.
@@ -147,15 +131,6 @@ public class MapReduceChildJVM {
return adminClasspath + " " + userClasspath;
   }
 
-  private static void setupLog4jProperties(Task task,
-  

[27/41] hadoop git commit: YARN-2056. Disable preemption at Queue level. Contributed by Eric Payne

2014-12-08 Thread kasha
YARN-2056. Disable preemption at Queue level. Contributed by Eric Payne


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4b130821
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4b130821
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4b130821

Branch: refs/heads/YARN-2139
Commit: 4b130821995a3cfe20c71e38e0f63294085c0491
Parents: 3c72f54
Author: Jason Lowe jl...@apache.org
Authored: Fri Dec 5 21:06:48 2014 +
Committer: Jason Lowe jl...@apache.org
Committed: Fri Dec 5 21:06:48 2014 +

--
 hadoop-yarn-project/CHANGES.txt |   2 +
 .../ProportionalCapacityPreemptionPolicy.java   | 170 +--
 ...estProportionalCapacityPreemptionPolicy.java | 283 ++-
 3 files changed, 424 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b130821/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 9804d61..0b88959 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -126,6 +126,8 @@ Release 2.7.0 - UNRELEASED
 
 YARN-2301. Improved yarn container command. (Naganarasimha G R via jianhe)
 
+YARN-2056. Disable preemption at Queue level (Eric Payne via jlowe)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b130821/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
index 0f48b0c..1a3f804 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
@@ -27,6 +27,7 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.NavigableSet;
+import java.util.PriorityQueue;
 import java.util.Set;
 
 import org.apache.commons.logging.Log;
@@ -111,6 +112,9 @@ public class ProportionalCapacityPreemptionPolicy 
implements SchedulingEditPolicy
  public static final String NATURAL_TERMINATION_FACTOR =
  
"yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor";

+  public static final String BASE_YARN_RM_PREEMPTION = 
"yarn.scheduler.capacity.";
+  public static final String SUFFIX_DISABLE_PREEMPTION = ".disable_preemption";
+
   // the dispatcher to send preempt and kill events
  public EventHandler<ContainerPreemptEvent> dispatcher;
 
@@ -192,7 +196,7 @@ public class ProportionalCapacityPreemptionPolicy 
implements SchedulingEditPolicy
 // extract a summary of the queues from scheduler
 TempQueue tRoot;
 synchronized (scheduler) {
-  tRoot = cloneQueues(root, clusterResources);
+  tRoot = cloneQueues(root, clusterResources, false);
 }
 
 // compute the ideal distribution of resources among queues
@@ -370,28 +374,60 @@ public class ProportionalCapacityPreemptionPolicy 
implements SchedulingEditPolicy
   private void computeFixpointAllocation(ResourceCalculator rc,
  Resource tot_guarant, Collection<TempQueue> qAlloc, Resource unassigned, 
   boolean ignoreGuarantee) {
+// Prior to assigning the unused resources, process each queue as follows:
+// If current  guaranteed, idealAssigned = guaranteed + untouchable extra
+// Else idealAssigned = current;
+// Subtract idealAssigned resources from unassigned.
+// If the queue has all of its needs met (that is, if
+// idealAssigned >= current + pending), remove it from consideration.
+// Sort queues from most under-guaranteed to most over-guaranteed.
+TQComparator tqComparator = new TQComparator(rc, tot_guarant);
+PriorityQueue<TempQueue> orderedByNeed =
+ new PriorityQueue<TempQueue>(10, tqComparator);
+for (Iterator<TempQueue> i = qAlloc.iterator(); i.hasNext();) {
+  TempQueue q = i.next();
+  if 

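The new comment block in `computeFixpointAllocation` describes ordering queues from most under-guaranteed to most over-guaranteed before assigning unused resources. A standalone sketch of that ordering idea follows; the `Q` class and the sample numbers are hypothetical simplifications, not the scheduler's real `TempQueue`/`TQComparator` types:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class OrderByNeed {
    // Hypothetical stand-in for TempQueue: only the two values the
    // ordering needs (guaranteed capacity and ideal assignment).
    static class Q {
        final String name;
        final double guaranteed;
        final double idealAssigned;
        Q(String name, double guaranteed, double idealAssigned) {
            this.name = name;
            this.guaranteed = guaranteed;
            this.idealAssigned = idealAssigned;
        }
        // Fraction of the guarantee already met; a lower ratio means
        // "more under-guaranteed", so that queue is served first.
        double ratio() {
            return guaranteed == 0 ? Double.MAX_VALUE
                                   : idealAssigned / guaranteed;
        }
    }

    static String preemptionOrder() {
        PriorityQueue<Q> orderedByNeed =
            new PriorityQueue<>(10, Comparator.comparingDouble(Q::ratio));
        orderedByNeed.add(new Q("a", 40, 10)); // 25% of guarantee met
        orderedByNeed.add(new Q("b", 30, 30)); // fully met
        orderedByNeed.add(new Q("c", 30, 15)); // 50% met
        StringBuilder out = new StringBuilder();
        while (!orderedByNeed.isEmpty()) {
            out.append(orderedByNeed.poll().name);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Most under-guaranteed first: a, then c, then b.
        System.out.println(preemptionOrder());
    }
}
```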
hadoop git commit: HADOOP-11287. Simplify UGI#reloginFromKeytab for Java 7+. Contributed by Li Lu.

2014-12-08 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 46a736516 -> e2c1ef4de


HADOOP-11287. Simplify UGI#reloginFromKeytab for Java 7+. Contributed by Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e2c1ef4d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e2c1ef4d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e2c1ef4d

Branch: refs/heads/branch-2
Commit: e2c1ef4debd291d5defc9ca527a085d83e44cc0a
Parents: 46a7365
Author: Haohui Mai whe...@apache.org
Authored: Mon Dec 8 21:10:32 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Mon Dec 8 21:10:49 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../hadoop/security/UserGroupInformation.java | 18 ++
 2 files changed, 5 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2c1ef4d/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index e82e357..e842fe6 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -52,6 +52,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11313. Adding a document about NativeLibraryChecker.
 (Tsuyoshi OZAWA via cnauroth)
 
+HADOOP-11287. Simplify UGI#reloginFromKeytab for Java 7+.
+(Li Lu via wheat9)
+
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2c1ef4d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index bb75ce8..65e4166 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -42,9 +42,9 @@ import java.util.Set;
 
 import javax.security.auth.Subject;
 import javax.security.auth.callback.CallbackHandler;
-import javax.security.auth.kerberos.KerberosKey;
 import javax.security.auth.kerberos.KerberosPrincipal;
 import javax.security.auth.kerberos.KerberosTicket;
+import javax.security.auth.kerberos.KeyTab;
 import javax.security.auth.login.AppConfigurationEntry;
 import javax.security.auth.login.AppConfigurationEntry.LoginModuleControlFlag;
 import javax.security.auth.login.LoginContext;
@@ -598,20 +598,6 @@ public class UserGroupInformation {
 user.setLogin(login);
   }
 
-  private static Class<?> KEY_TAB_CLASS = KerberosKey.class;
-  static {
-try {
-  // We use KEY_TAB_CLASS to determine if the UGI is logged in from
-  // keytab. In JDK6 and JDK7, if useKeyTab and storeKey are specified
-  // in the Krb5LoginModule, then some number of KerberosKey objects
-  // are added to the Subject's private credentials. However, in JDK8,
-  // a KeyTab object is added instead. More details in HADOOP-10786.
-  KEY_TAB_CLASS = Class.forName("javax.security.auth.kerberos.KeyTab");
-} catch (ClassNotFoundException cnfe) {
-  // Ignore. javax.security.auth.kerberos.KeyTab does not exist in JDK6.
-}
-  }
-
   /**
* Create a UserGroupInformation for the given subject.
* This does not change the subject or acquire new credentials.
@@ -620,7 +606,7 @@ public class UserGroupInformation {
   UserGroupInformation(Subject subject) {
 this.subject = subject;
 this.user = subject.getPrincipals(User.class).iterator().next();
-this.isKeytab = !subject.getPrivateCredentials(KEY_TAB_CLASS).isEmpty();
+this.isKeytab = !subject.getPrivateCredentials(KeyTab.class).isEmpty();
 this.isKrbTkt = !subject.getPrivateCredentials(KerberosTicket.class).isEmpty();
   }
   



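The static block this patch deletes is a reflection-based capability probe: look a class up by name and fall back when it is absent. A minimal standalone sketch of the same pattern, using the real JDK class names from the diff; on Java 7+ the `KeyTab` lookup always succeeds, which is why HADOOP-11287 can replace the probe with a plain `KeyTab.class` reference:

```java
public class KeytabProbe {
    // Probe for a class that only exists on newer JDKs; keep a
    // fallback type when the lookup fails (as it did on JDK6).
    static Class<?> keyTabClass() {
        Class<?> c = javax.security.auth.kerberos.KerberosKey.class;
        try {
            c = Class.forName("javax.security.auth.kerberos.KeyTab");
        } catch (ClassNotFoundException cnfe) {
            // Pre-JDK7: javax.security.auth.kerberos.KeyTab is absent,
            // so the KerberosKey fallback stays in effect.
        }
        return c;
    }

    public static void main(String[] args) {
        // On Java 7+ this prints javax.security.auth.kerberos.KeyTab.
        System.out.println(keyTabClass().getName());
    }
}
```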
hadoop git commit: HADOOP-11287. Simplify UGI#reloginFromKeytab for Java 7+. Contributed by Li Lu.

2014-12-08 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/trunk ddffcd8fa -> 0ee41612b


HADOOP-11287. Simplify UGI#reloginFromKeytab for Java 7+. Contributed by Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0ee41612
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0ee41612
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0ee41612

Branch: refs/heads/trunk
Commit: 0ee41612bb237331fc7130a6fb8b5e3366fcc221
Parents: ddffcd8
Author: Haohui Mai whe...@apache.org
Authored: Mon Dec 8 21:10:32 2014 -0800
Committer: Haohui Mai whe...@apache.org
Committed: Mon Dec 8 21:10:32 2014 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../hadoop/security/UserGroupInformation.java | 18 ++
 2 files changed, 5 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0ee41612/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index d9219cc..4b998d0 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -418,6 +418,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11313. Adding a document about NativeLibraryChecker.
 (Tsuyoshi OZAWA via cnauroth)
 
+HADOOP-11287. Simplify UGI#reloginFromKeytab for Java 7+.
+(Li Lu via wheat9)
+
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0ee41612/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 0541f9d..4b0b5f3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -44,9 +44,9 @@ import java.util.Set;
 
 import javax.security.auth.Subject;
 import javax.security.auth.callback.CallbackHandler;
-import javax.security.auth.kerberos.KerberosKey;
 import javax.security.auth.kerberos.KerberosPrincipal;
 import javax.security.auth.kerberos.KerberosTicket;
+import javax.security.auth.kerberos.KeyTab;
 import javax.security.auth.login.AppConfigurationEntry;
 import javax.security.auth.login.AppConfigurationEntry.LoginModuleControlFlag;
 import javax.security.auth.login.LoginContext;
@@ -610,20 +610,6 @@ public class UserGroupInformation {
 user.setLogin(login);
   }
 
-  private static Class<?> KEY_TAB_CLASS = KerberosKey.class;
-  static {
-try {
-  // We use KEY_TAB_CLASS to determine if the UGI is logged in from
-  // keytab. In JDK6 and JDK7, if useKeyTab and storeKey are specified
-  // in the Krb5LoginModule, then some number of KerberosKey objects
-  // are added to the Subject's private credentials. However, in JDK8,
-  // a KeyTab object is added instead. More details in HADOOP-10786.
-  KEY_TAB_CLASS = Class.forName("javax.security.auth.kerberos.KeyTab");
-} catch (ClassNotFoundException cnfe) {
-  // Ignore. javax.security.auth.kerberos.KeyTab does not exist in JDK6.
-}
-  }
-
   /**
* Create a UserGroupInformation for the given subject.
* This does not change the subject or acquire new credentials.
@@ -632,7 +618,7 @@ public class UserGroupInformation {
   UserGroupInformation(Subject subject) {
 this.subject = subject;
 this.user = subject.getPrincipals(User.class).iterator().next();
-this.isKeytab = !subject.getPrivateCredentials(KEY_TAB_CLASS).isEmpty();
+this.isKeytab = !subject.getPrivateCredentials(KeyTab.class).isEmpty();
 this.isKrbTkt = !subject.getPrivateCredentials(KerberosTicket.class).isEmpty();
   }
   



hadoop git commit: YARN-2931. PublicLocalizer may fail until directory is initialized by LocalizeRunner. (Anubhav Dhoot via kasha)

2014-12-08 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0ee41612b -> db73cc912


YARN-2931. PublicLocalizer may fail until directory is initialized by 
LocalizeRunner. (Anubhav Dhoot via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/db73cc91
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/db73cc91
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/db73cc91

Branch: refs/heads/trunk
Commit: db73cc9124bb8511c1626ba40d3fad81e980e44f
Parents: 0ee4161
Author: Karthik Kambatla ka...@apache.org
Authored: Mon Dec 8 22:18:32 2014 -0800
Committer: Karthik Kambatla ka...@apache.org
Committed: Mon Dec 8 22:26:18 2014 -0800

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../localizer/ResourceLocalizationService.java  |   6 +
 .../TestResourceLocalizationService.java| 110 ++-
 3 files changed, 113 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/db73cc91/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 43b19ec..d06c831 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -200,6 +200,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2927. [YARN-1492] InMemorySCMStore properties are inconsistent. 
 (Ray Chiang via kasha)
 
+YARN-2931. PublicLocalizer may fail until directory is initialized by
+LocalizeRunner. (Anubhav Dhoot via kasha)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/db73cc91/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
index f4b6221..5440980 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
@@ -775,6 +775,12 @@ public class ResourceLocalizationService extends 
CompositeService
 if (!publicDirDestPath.getParent().equals(publicRootPath)) {
   DiskChecker.checkDir(new File(publicDirDestPath.toUri().getPath()));
 }
+
+// In case this is not a newly initialized nm state, ensure
+// initialized local/log dirs similar to LocalizerRunner
+getInitializedLocalDirs();
+getInitializedLogDirs();
+
 // explicitly synchronize pending here to avoid future task
 // completing and being dequeued before pending updated
 synchronized (pending) {

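The YARN-2931 fix makes the public localizer eagerly initialize local/log dirs before writing into them, instead of assuming a `LocalizerRunner` did it first. A generic sketch of that invariant, with nothing Hadoop-specific; the `filecache` name and helper methods are illustrative only:

```java
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class EnsureLocalDir {
    // Idempotently create and verify a localizer target dir before use,
    // so the first write does not fail on a not-yet-initialized tree.
    static File initializedLocalDir(Path root) {
        try {
            Path filecache = root.resolve("filecache");
            Files.createDirectories(filecache); // no-op if already present
            File dir = filecache.toFile();
            if (!dir.isDirectory() || !dir.canWrite()) {
                throw new IOException("unusable local dir: " + dir);
            }
            return dir;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static boolean demo() {
        try {
            Path root = Files.createTempDirectory("nm-local-dir");
            return initializedLocalDir(root).isDirectory();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Because `createDirectories` is idempotent, calling the initialization on every request is safe even after a NodeManager restart, which is the restart scenario the patch comment mentions.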
http://git-wip-us.apache.org/repos/asf/hadoop/blob/db73cc91/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
index 1051e7a..f968bb9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
@@ -32,18 +32,16 @@ import static org.mockito.Matchers.eq;
 import static org.mockito.Matchers.isA;
 import static org.mockito.Matchers.isNull;
 import static