hadoop git commit: MAPREDUCE-6740. Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval. (Haibo Chen via kasha)

2016-09-21 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 1b91ebb71 -> a0b076785


MAPREDUCE-6740. Enforce mapreduce.task.timeout to be at least 
mapreduce.task.progress-report.interval. (Haibo Chen via kasha)

(cherry picked from commit 537095d13cd38212ed162e0a360bdd9a8bd83498)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a0b07678
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a0b07678
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a0b07678

Branch: refs/heads/branch-2
Commit: a0b07678568a4bdae19ad6245ba5d4d21324dccc
Parents: 1b91ebb
Author: Karthik Kambatla 
Authored: Wed Sep 21 18:30:11 2016 -0700
Committer: Karthik Kambatla 
Committed: Wed Sep 21 18:30:42 2016 -0700

--
 .../mapreduce/v2/app/TaskHeartbeatHandler.java  | 24 ++-
 .../v2/app/TestTaskHeartbeatHandler.java| 67 
 .../java/org/apache/hadoop/mapred/Task.java |  8 ++-
 .../apache/hadoop/mapreduce/MRJobConfig.java|  9 ++-
 .../hadoop/mapreduce/util/MRJobConfUtil.java| 16 +
 5 files changed, 113 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0b07678/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
index 303b4c1..6a716c7 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
@@ -23,10 +23,12 @@ import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.mapreduce.MRJobConfig;
+import org.apache.hadoop.mapreduce.util.MRJobConfUtil;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId;
 import 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdateEvent;
 import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent;
@@ -67,7 +69,7 @@ public class TaskHeartbeatHandler extends AbstractService {
   //received from a task.
   private Thread lostTaskCheckerThread;
   private volatile boolean stopped;
-  private int taskTimeOut = 5 * 60 * 1000;// 5 mins
+  private long taskTimeOut;
   private int taskTimeOutCheckInterval = 30 * 1000; // 30 seconds.
 
   private final EventHandler eventHandler;
@@ -87,7 +89,19 @@ public class TaskHeartbeatHandler extends AbstractService {
   @Override
   protected void serviceInit(Configuration conf) throws Exception {
 super.serviceInit(conf);
-taskTimeOut = conf.getInt(MRJobConfig.TASK_TIMEOUT, 5 * 60 * 1000);
+taskTimeOut = conf.getLong(
+MRJobConfig.TASK_TIMEOUT, MRJobConfig.DEFAULT_TASK_TIMEOUT_MILLIS);
+
+// enforce task timeout is at least twice as long as task report interval
+long taskProgressReportIntervalMillis = MRJobConfUtil.
+getTaskProgressReportInterval(conf);
+long minimumTaskTimeoutAllowed = taskProgressReportIntervalMillis * 2;
+if(taskTimeOut < minimumTaskTimeoutAllowed) {
+  taskTimeOut = minimumTaskTimeoutAllowed;
+  LOG.info("Task timeout must be at least twice as long as the task " +
+  "status report interval. Setting task timeout to " + taskTimeOut);
+}
+
 taskTimeOutCheckInterval =
 conf.getInt(MRJobConfig.TASK_TIMEOUT_CHECK_INTERVAL_MS, 30 * 1000);
   }
@@ -140,7 +154,7 @@ public class TaskHeartbeatHandler extends AbstractService {
 
 while (iterator.hasNext()) {
   Map.Entry entry = iterator.next();
-  boolean taskTimedOut = (taskTimeOut > 0) && 
+  boolean taskTimedOut = (taskTimeOut > 0) &&
   (currentTime > (entry.getValue().getLastProgress() + 
taskTimeOut));

   if(taskTimedOut) {
@@ -163,4 +177,8 @@ public class TaskHeartbeatHandler extends AbstractService {
 }
   }
 
+  @VisibleForTesting
+  public long getTaskTimeOut() {
+return taskTimeOut;
+  }
 }
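The serviceInit() change above boils down to taking the maximum of the configured timeout and twice the progress-report interval, so a single missed report can never kill a healthy task. A minimal standalone sketch of that clamping rule (class and method names are illustrative, not Hadoop's):

```java
// Sketch of the clamping rule MAPREDUCE-6740 enforces: the effective
// task timeout is raised to at least twice the progress-report interval.
public class TimeoutClamp {
    static long effectiveTaskTimeout(long configuredTimeoutMs,
                                     long progressReportIntervalMs) {
        // a timeout shorter than two report intervals is raised to that floor
        long minimumAllowed = progressReportIntervalMs * 2;
        return Math.max(configuredTimeoutMs, minimumAllowed);
    }

    public static void main(String[] args) {
        // a 10s timeout with a 30s report interval is raised to 60s
        System.out.println(effectiveTaskTimeout(10_000L, 30_000L));
        // a generous timeout is left untouched
        System.out.println(effectiveTaskTimeout(600_000L, 30_000L));
    }
}
```

With this rule in place, `mapreduce.task.timeout` values below twice `mapreduce.task.progress-report.interval` are silently lifted and logged, rather than rejected.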


hadoop git commit: MAPREDUCE-6740. Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval. (Haibo Chen via kasha)

2016-09-21 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/trunk 964e546ab -> 537095d13


MAPREDUCE-6740. Enforce mapreduce.task.timeout to be at least 
mapreduce.task.progress-report.interval. (Haibo Chen via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/537095d1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/537095d1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/537095d1

Branch: refs/heads/trunk
Commit: 537095d13cd38212ed162e0a360bdd9a8bd83498
Parents: 964e546
Author: Karthik Kambatla 
Authored: Wed Sep 21 18:30:11 2016 -0700
Committer: Karthik Kambatla 
Committed: Wed Sep 21 18:30:11 2016 -0700

--
 .../mapreduce/v2/app/TaskHeartbeatHandler.java  | 24 ++-
 .../v2/app/TestTaskHeartbeatHandler.java| 67 
 .../java/org/apache/hadoop/mapred/Task.java |  8 ++-
 .../apache/hadoop/mapreduce/MRJobConfig.java|  9 ++-
 .../hadoop/mapreduce/util/MRJobConfUtil.java| 16 +
 5 files changed, 113 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/537095d1/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
index 303b4c1..6a716c7 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
@@ -23,10 +23,12 @@ import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.mapreduce.MRJobConfig;
+import org.apache.hadoop.mapreduce.util.MRJobConfUtil;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId;
 import 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdateEvent;
 import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent;
@@ -67,7 +69,7 @@ public class TaskHeartbeatHandler extends AbstractService {
   //received from a task.
   private Thread lostTaskCheckerThread;
   private volatile boolean stopped;
-  private int taskTimeOut = 5 * 60 * 1000;// 5 mins
+  private long taskTimeOut;
   private int taskTimeOutCheckInterval = 30 * 1000; // 30 seconds.
 
   private final EventHandler eventHandler;
@@ -87,7 +89,19 @@ public class TaskHeartbeatHandler extends AbstractService {
   @Override
   protected void serviceInit(Configuration conf) throws Exception {
 super.serviceInit(conf);
-taskTimeOut = conf.getInt(MRJobConfig.TASK_TIMEOUT, 5 * 60 * 1000);
+taskTimeOut = conf.getLong(
+MRJobConfig.TASK_TIMEOUT, MRJobConfig.DEFAULT_TASK_TIMEOUT_MILLIS);
+
+// enforce task timeout is at least twice as long as task report interval
+long taskProgressReportIntervalMillis = MRJobConfUtil.
+getTaskProgressReportInterval(conf);
+long minimumTaskTimeoutAllowed = taskProgressReportIntervalMillis * 2;
+if(taskTimeOut < minimumTaskTimeoutAllowed) {
+  taskTimeOut = minimumTaskTimeoutAllowed;
+  LOG.info("Task timeout must be at least twice as long as the task " +
+  "status report interval. Setting task timeout to " + taskTimeOut);
+}
+
 taskTimeOutCheckInterval =
 conf.getInt(MRJobConfig.TASK_TIMEOUT_CHECK_INTERVAL_MS, 30 * 1000);
   }
@@ -140,7 +154,7 @@ public class TaskHeartbeatHandler extends AbstractService {
 
 while (iterator.hasNext()) {
   Map.Entry entry = iterator.next();
-  boolean taskTimedOut = (taskTimeOut > 0) && 
+  boolean taskTimedOut = (taskTimeOut > 0) &&
   (currentTime > (entry.getValue().getLastProgress() + 
taskTimeOut));

   if(taskTimedOut) {
@@ -163,4 +177,8 @@ public class TaskHeartbeatHandler extends AbstractService {
 }
   }
 
+  @VisibleForTesting
+  public long getTaskTimeOut() {
+return taskTimeOut;
+  }
 }


[07/13] hadoop git commit: HADOOP-13601. Fix a log message typo in AbstractDelegationTokenSecretManager. Contributed by Mehran Hassani.

2016-09-21 Thread drankye
HADOOP-13601. Fix a log message typo in AbstractDelegationTokenSecretManager. 
Contributed by Mehran Hassani.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e80386d6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e80386d6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e80386d6

Branch: refs/heads/HADOOP-12756
Commit: e80386d69d5fb6a08aa3366e42d2518747af569f
Parents: 9f03b40
Author: Mingliang Liu 
Authored: Tue Sep 20 13:19:44 2016 -0700
Committer: Mingliang Liu 
Committed: Tue Sep 20 13:20:01 2016 -0700

--
 .../token/delegation/AbstractDelegationTokenSecretManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e80386d6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
index 1d7f2f5..cc2efc9 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
@@ -528,7 +528,7 @@ extends AbstractDelegationTokenIdentifier>
 DataInputStream in = new DataInputStream(buf);
 TokenIdent id = createIdentifier();
 id.readFields(in);
-LOG.info("Token cancelation requested for identifier: "+id);
+LOG.info("Token cancellation requested for identifier: " + id);
 
 if (id.getUser() == null) {
   throw new InvalidToken("Token with no owner");


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[09/13] hadoop git commit: YARN-4591. YARN Web UIs should provide a robots.txt. (Sidharta Seethana via wangda)

2016-09-21 Thread drankye
YARN-4591. YARN Web UIs should provide a robots.txt. (Sidharta Seethana via 
wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5a58bfee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5a58bfee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5a58bfee

Branch: refs/heads/HADOOP-12756
Commit: 5a58bfee30a662b1b556048504f66f9cf00d182a
Parents: 0e918df
Author: Wangda Tan 
Authored: Tue Sep 20 17:20:50 2016 -0700
Committer: Wangda Tan 
Committed: Tue Sep 20 17:20:50 2016 -0700

--
 .../apache/hadoop/yarn/webapp/Dispatcher.java   |  9 +
 .../org/apache/hadoop/yarn/webapp/WebApp.java   |  4 +-
 .../hadoop/yarn/webapp/view/RobotsTextPage.java | 39 
 .../apache/hadoop/yarn/webapp/TestWebApp.java   | 26 +
 4 files changed, 77 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a58bfee/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
index 66dd21b..d519dbb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.http.HtmlQuoting;
 import org.apache.hadoop.yarn.webapp.Controller.RequestContext;
 import org.apache.hadoop.yarn.webapp.Router.Dest;
 import org.apache.hadoop.yarn.webapp.view.ErrorPage;
+import org.apache.hadoop.yarn.webapp.view.RobotsTextPage;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -117,6 +118,14 @@ public class Dispatcher extends HttpServlet {
 }
 Controller.RequestContext rc =
 injector.getInstance(Controller.RequestContext.class);
+
+//short-circuit robots.txt serving for all YARN webapps.
+if (uri.equals(RobotsTextPage.ROBOTS_TXT_PATH)) {
+  rc.setStatus(HttpServletResponse.SC_FOUND);
+  render(RobotsTextPage.class);
+  return;
+}
+
 if (setCookieParams(rc, req) > 0) {
   Cookie ec = rc.cookies().get(ERROR_COOKIE);
   if (ec != null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a58bfee/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
index 2c21d1b..fe800f0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
@@ -29,6 +29,7 @@ import java.util.Map;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.http.HttpServer2;
+import org.apache.hadoop.yarn.webapp.view.RobotsTextPage;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -158,7 +159,8 @@ public abstract class WebApp extends ServletModule {
   public void configureServlets() {
 setup();
 
-serve("/", "/__stop").with(Dispatcher.class);
+serve("/", "/__stop", RobotsTextPage.ROBOTS_TXT_PATH)
+.with(Dispatcher.class);
 
 for (String path : this.servePathSpecs) {
   serve(path).with(Dispatcher.class);
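The short-circuited page itself only needs to emit a body that denies crawling of every path. A minimal sketch of such content (hypothetical class, not Hadoop's actual RobotsTextPage view):

```java
// Sketch of the robots.txt body a YARN web UI would serve after YARN-4591:
// deny all user agents access to all paths, since cluster UIs should not
// be indexed by crawlers.
public class RobotsText {
    static String robotsTxt() {
        return "User-agent: *\n"   // applies to every crawler
             + "Disallow: /\n";    // disallow the whole site
    }

    public static void main(String[] args) {
        System.out.print(robotsTxt());
    }
}
```

In the actual patch, the Dispatcher short-circuits requests for `RobotsTextPage.ROBOTS_TXT_PATH` and renders the page before any controller routing happens, so every YARN webapp gets this behavior for free.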

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a58bfee/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/RobotsTextPage.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/RobotsTextPage.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/RobotsTextPage.java
new file mode 100644
index 000..b15d492
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/RobotsTextPage.java
@@ -0,0 +1,39 @@
+/*
+ * *
+ *  Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional 

[08/13] hadoop git commit: HDFS-10879. TestEncryptionZonesWithKMS#testReadWrite fails intermittently. Contributed by Xiao Chen.

2016-09-21 Thread drankye
HDFS-10879. TestEncryptionZonesWithKMS#testReadWrite fails intermittently. 
Contributed by Xiao Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0e918dff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0e918dff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0e918dff

Branch: refs/heads/HADOOP-12756
Commit: 0e918dff594e9ba5434fdee7fc1f6394b62b32cd
Parents: e80386d
Author: Xiao Chen 
Authored: Tue Sep 20 16:52:05 2016 -0700
Committer: Xiao Chen 
Committed: Tue Sep 20 16:56:52 2016 -0700

--
 .../apache/hadoop/hdfs/TestEncryptionZones.java | 23 +++-
 1 file changed, 22 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e918dff/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
index b634dd2..9168ca6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
@@ -45,7 +45,9 @@ import org.apache.hadoop.crypto.CipherSuite;
 import org.apache.hadoop.crypto.CryptoProtocolVersion;
 import org.apache.hadoop.crypto.key.JavaKeyStoreProvider;
 import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
 import org.apache.hadoop.crypto.key.KeyProviderFactory;
+import 
org.apache.hadoop.crypto.key.kms.server.EagerKeyGeneratorKeyProviderCryptoExtension;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.FSDataOutputStream;
@@ -734,14 +736,33 @@ public class TestEncryptionZones {
 // Roll the key of the encryption zone
 assertNumZones(1);
 String keyName = dfsAdmin.listEncryptionZones().next().getKeyName();
+FileEncryptionInfo feInfo1 = getFileEncryptionInfo(encFile1);
 cluster.getNamesystem().getProvider().rollNewVersion(keyName);
+/**
+ * due to the cache on the server side, client may get old keys.
+ * @see EagerKeyGeneratorKeyProviderCryptoExtension#rollNewVersion(String)
+ */
+boolean rollSucceeded = false;
+for (int i = 0; i <= EagerKeyGeneratorKeyProviderCryptoExtension
+.KMS_KEY_CACHE_SIZE_DEFAULT + CommonConfigurationKeysPublic.
+KMS_CLIENT_ENC_KEY_CACHE_SIZE_DEFAULT; ++i) {
+  KeyProviderCryptoExtension.EncryptedKeyVersion ekv2 =
+  cluster.getNamesystem().getProvider().generateEncryptedKey(TEST_KEY);
+  if (!(feInfo1.getEzKeyVersionName()
+  .equals(ekv2.getEncryptionKeyVersionName()))) {
+rollSucceeded = true;
+break;
+  }
+}
+Assert.assertTrue("rollover did not generate a new key even after"
++ " queue is drained", rollSucceeded);
+
 // Read them back in and compare byte-by-byte
 verifyFilesEqual(fs, baseFile, encFile1, len);
 // Write a new enc file and validate
 final Path encFile2 = new Path(zone, "myfile2");
 DFSTestUtil.createFile(fs, encFile2, len, (short) 1, 0xFEED);
 // FEInfos should be different
-FileEncryptionInfo feInfo1 = getFileEncryptionInfo(encFile1);
 FileEncryptionInfo feInfo2 = getFileEncryptionInfo(encFile2);
 assertFalse("EDEKs should be different", Arrays
 .equals(feInfo1.getEncryptedDataEncryptionKey(),
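The fix relies on bounded polling: because both the KMS server and the client cache pre-generated encrypted keys, a rolled key only becomes visible once those caches drain, so the test retries a bounded number of times instead of asserting immediately. The pattern in isolation (names hypothetical, not the Hadoop test code):

```java
// Sketch of the bounded-retry idea in HDFS-10879: poll a condition up to
// a fixed budget (sized to the combined cache depth) and report whether
// it ever became true.
import java.util.function.Supplier;

public class DrainRetry {
    static boolean retryUntil(Supplier<Boolean> condition, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            if (condition.get()) {
                return true;  // condition observed within the budget
            }
        }
        return false;         // budget exhausted without success
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // condition becomes true on the 5th call, well within the budget
        boolean ok = retryUntil(() -> ++calls[0] >= 5, 10);
        System.out.println(ok + " after " + calls[0] + " attempts");
    }
}
```

The test's budget is the sum of the server-side and client-side cache sizes, which guarantees the caches are fully drained before the assertion fires.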





[11/13] hadoop git commit: HDFS-10861. Refactor StripeReaders and use ECChunk version decode API. Contributed by Sammi Chen

2016-09-21 Thread drankye
HDFS-10861. Refactor StripeReaders and use ECChunk version decode API. 
Contributed by Sammi Chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/734d54c1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/734d54c1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/734d54c1

Branch: refs/heads/HADOOP-12756
Commit: 734d54c1a8950446e68098f62d8964e02ecc2890
Parents: 2b66d9e
Author: Kai Zheng 
Authored: Wed Sep 21 21:34:48 2016 +0800
Committer: Kai Zheng 
Committed: Wed Sep 21 21:34:48 2016 +0800

--
 .../apache/hadoop/io/ElasticByteBufferPool.java |   2 +-
 .../apache/hadoop/io/erasurecode/ECChunk.java   |  22 +
 .../io/erasurecode/rawcoder/CoderUtil.java  |   3 +
 .../org/apache/hadoop/hdfs/DFSInputStream.java  |  20 +-
 .../hadoop/hdfs/DFSStripedInputStream.java  | 654 +++
 .../hadoop/hdfs/PositionStripeReader.java   | 104 +++
 .../hadoop/hdfs/StatefulStripeReader.java   |  95 +++
 .../org/apache/hadoop/hdfs/StripeReader.java| 463 +
 .../hadoop/hdfs/util/StripedBlockUtil.java  | 158 ++---
 .../hadoop/hdfs/util/TestStripedBlockUtil.java  |   1 -
 10 files changed, 844 insertions(+), 678 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/734d54c1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
index c35d608..023f37f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
@@ -85,7 +85,7 @@ public final class ElasticByteBufferPool implements 
ByteBufferPool {
   private final TreeMap getBufferTree(boolean direct) {
 return direct ? directBuffers : buffers;
   }
-  
+
   @Override
   public synchronized ByteBuffer getBuffer(boolean direct, int length) {
 TreeMap tree = getBufferTree(direct);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/734d54c1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
index cd7c6be..536715b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
@@ -29,6 +29,9 @@ public class ECChunk {
 
   private ByteBuffer chunkBuffer;
 
+  // TODO: should be in a more general flags
+  private boolean allZero = false;
+
   /**
* Wrapping a ByteBuffer
* @param buffer buffer to be wrapped by the chunk
@@ -37,6 +40,13 @@ public class ECChunk {
 this.chunkBuffer = buffer;
   }
 
+  public ECChunk(ByteBuffer buffer, int offset, int len) {
+ByteBuffer tmp = buffer.duplicate();
+tmp.position(offset);
+tmp.limit(offset + len);
+this.chunkBuffer = tmp.slice();
+  }
+
   /**
* Wrapping a bytes array
* @param buffer buffer to be wrapped by the chunk
@@ -45,6 +55,18 @@ public class ECChunk {
 this.chunkBuffer = ByteBuffer.wrap(buffer);
   }
 
+  public ECChunk(byte[] buffer, int offset, int len) {
+this.chunkBuffer = ByteBuffer.wrap(buffer, offset, len);
+  }
+
+  public boolean isAllZero() {
+return allZero;
+  }
+
+  public void setAllZero(boolean allZero) {
+this.allZero = allZero;
+  }
+
   /**
* Convert to ByteBuffer
* @return ByteBuffer

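The new `ECChunk(ByteBuffer, offset, len)` constructor windows a buffer without copying bytes: `duplicate()` shares the backing storage while giving independent position/limit, and `slice()` narrows the view. The same sequence in isolation (illustrative class name, not Hadoop's):

```java
// Sketch of the zero-copy windowing used by the ECChunk(ByteBuffer, int, int)
// constructor: duplicate/position/limit/slice yields a view of [offset,
// offset+len) that shares storage with the original buffer.
import java.nio.ByteBuffer;

public class BufferWindow {
    static ByteBuffer window(ByteBuffer buffer, int offset, int len) {
        ByteBuffer tmp = buffer.duplicate(); // independent position/limit, shared bytes
        tmp.position(offset);
        tmp.limit(offset + len);
        return tmp.slice();                  // capacity == len, position == 0
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap("hadoop-erasure".getBytes());
        ByteBuffer win = window(buf, 7, 7);
        byte[] out = new byte[win.remaining()];
        win.get(out);
        System.out.println(new String(out)); // the 7-byte window
    }
}
```

Because no bytes are copied, the decode path can hand out per-cell chunks over one large striped read buffer, which is what lets the refactored StripeReaders use the ECChunk-based decode API cheaply.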
http://git-wip-us.apache.org/repos/asf/hadoop/blob/734d54c1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
index b22d44f..ef34639 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
@@ -115,6 +115,9 @@ final class CoderUtil {
 

[04/13] hadoop git commit: YARN-5655. TestContainerManagerSecurity#testNMTokens is asserting. Contributed by Robert Kanter

2016-09-21 Thread drankye
YARN-5655. TestContainerManagerSecurity#testNMTokens is asserting. Contributed 
by Robert Kanter


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c6d1d742
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c6d1d742
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c6d1d742

Branch: refs/heads/HADOOP-12756
Commit: c6d1d742e70e7b8f1d89cf9a4780657646e6a367
Parents: 734d54c
Author: Jason Lowe 
Authored: Tue Sep 20 14:15:06 2016 +
Committer: Jason Lowe 
Committed: Tue Sep 20 14:15:06 2016 +

--
 .../apache/hadoop/yarn/server/TestContainerManagerSecurity.java | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c6d1d742/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
index ee3396d..408c1cc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
@@ -68,6 +68,8 @@ import org.apache.hadoop.yarn.server.nodemanager.Context;
 import org.apache.hadoop.yarn.server.nodemanager.NodeManager;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl;
 import 
org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM;
+import org.apache.hadoop.yarn.server.resourcemanager.rmapp.MockRMApp;
+import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState;
 import 
org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM;
 import 
org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager;
 import org.apache.hadoop.yarn.server.security.BaseNMTokenSecretManager;
@@ -205,6 +207,9 @@ public class TestContainerManagerSecurity extends 
KerberosSecurityTestcase {
 Resource r = Resource.newInstance(1024, 1);
 
 ApplicationId appId = ApplicationId.newInstance(1, 1);
+MockRMApp m = new MockRMApp(appId.getId(), appId.getClusterTimestamp(),
+RMAppState.NEW);
+yarnCluster.getResourceManager().getRMContext().getRMApps().put(appId, m);
 ApplicationAttemptId validAppAttemptId =
 ApplicationAttemptId.newInstance(appId, 1);
 





[13/13] hadoop git commit: HADOOP-13634. Some configuration in doc has been outdated. Contributed by Genmao Yu

2016-09-21 Thread drankye
HADOOP-13634. Some configuration in doc has been outdated. Contributed by 
Genmao Yu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/26d5df39
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/26d5df39
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/26d5df39

Branch: refs/heads/HADOOP-12756
Commit: 26d5df390cf976dcc1d17fc68d0fed789dc34e84
Parents: 846c5ce
Author: Kai Zheng 
Authored: Fri Sep 23 08:44:28 2016 +0800
Committer: Kai Zheng 
Committed: Fri Sep 23 08:44:28 2016 +0800

--
 .../src/site/markdown/tools/hadoop-aliyun/index.md| 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/26d5df39/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
index 4095e06..88c83b5 100644
--- a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -167,7 +167,7 @@ please raise your issues with them.
 
 
   fs.oss.paging.maximum
-  500
+  1000
   How many keys to request from Aliyun OSS when doing 
directory listings at a time.
   
 
@@ -196,11 +196,6 @@ please raise your issues with them.
 
 
 
-  fs.oss.buffer.dir
-  Comma separated list of directories to buffer OSS data 
before uploading to Aliyun OSS
-
-
-
   fs.oss.acl.default
   
   Set a canned ACL for bucket. Value may be private, 
public-read, public-read-write.





[03/13] hadoop git commit: YARN-3140. Improve locks in AbstractCSQueue/LeafQueue/ParentQueue. Contributed by Wangda Tan

2016-09-21 Thread drankye
YARN-3140. Improve locks in AbstractCSQueue/LeafQueue/ParentQueue. Contributed 
by Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2b66d9ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2b66d9ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2b66d9ec

Branch: refs/heads/HADOOP-12756
Commit: 2b66d9ec5bdaec7e6b278926fbb6f222c4e3afaa
Parents: e52d6e7
Author: Jian He 
Authored: Tue Sep 20 15:03:07 2016 +0800
Committer: Jian He 
Committed: Tue Sep 20 15:03:31 2016 +0800

--
 .../dev-support/findbugs-exclude.xml|   10 +
 .../scheduler/capacity/AbstractCSQueue.java |  378 ++--
 .../scheduler/capacity/LeafQueue.java   | 1819 ++
 .../scheduler/capacity/ParentQueue.java |  825 
 .../scheduler/capacity/PlanQueue.java   |  122 +-
 .../scheduler/capacity/ReservationQueue.java|   67 +-
 .../capacity/TestContainerResizing.java |4 +-
 7 files changed, 1787 insertions(+), 1438 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b66d9ec/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml 
b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index a5c0f71..01b1da7 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -564,4 +564,14 @@
     
     
   
+
+  
+  
+
+    
+  
+  
+    
+
+  
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b66d9ec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
index 1d8f929..096f5ea 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
@@ -24,6 +24,7 @@ import java.util.HashSet;
 import java.util.Iterator;
 import java.util.Map;
 import java.util.Set;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
@@ -60,25 +61,25 @@ import com.google.common.collect.Sets;
 
 public abstract class AbstractCSQueue implements CSQueue {
   private static final Log LOG = LogFactory.getLog(AbstractCSQueue.class);  
-  CSQueue parent;
+  volatile CSQueue parent;
   final String queueName;
   volatile int numContainers;
   
   final Resource minimumAllocation;
   volatile Resource maximumAllocation;
-  QueueState state;
+  volatile QueueState state;
   final CSQueueMetrics metrics;
   protected final PrivilegedEntity queueEntity;
 
   final ResourceCalculator resourceCalculator;
  Set<String> accessibleLabels;
-  RMNodeLabelsManager labelManager;
+  final RMNodeLabelsManager labelManager;
   String defaultLabelExpression;
   
  Map<AccessType, AccessControlList> acls =
  new HashMap<AccessType, AccessControlList>();
   volatile boolean reservationsContinueLooking;
-  private boolean preemptionDisabled;
+  private volatile boolean preemptionDisabled;
 
   // Track resource usage-by-label like used-resource/pending-resource, etc.
   volatile ResourceUsage queueUsage;
@@ -94,6 +95,9 @@ public abstract class AbstractCSQueue implements CSQueue {
 
   protected ActivitiesManager activitiesManager;
 
+  protected ReentrantReadWriteLock.ReadLock readLock;
+  protected ReentrantReadWriteLock.WriteLock writeLock;
+
   public AbstractCSQueue(CapacitySchedulerContext cs,
   String queueName, CSQueue parent, CSQueue old) throws IOException {
 this.labelManager = cs.getRMContext().getNodeLabelManager();
@@ -116,7 +120,11 @@ public abstract class AbstractCSQueue implements CSQueue {
 queueEntity = new PrivilegedEntity(EntityType.QUEUE, getQueuePath());
 
 // initialize QueueCapacities
-
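The AbstractCSQueue hunk above pairs newly `volatile` fields with a `ReentrantReadWriteLock` (YARN-3140). As a minimal, self-contained sketch of that pattern — a hypothetical `QueueInfo` class, not the actual Hadoop queue — it looks like this:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the YARN-3140 locking pattern: coarse synchronized
// methods are replaced by a ReentrantReadWriteLock so many readers can
// proceed concurrently, while single-field reads rely on volatile.
class QueueInfo {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock.ReadLock readLock = lock.readLock();
    private final ReentrantReadWriteLock.WriteLock writeLock = lock.writeLock();

    private int numContainers;                 // compound updates need the write lock
    private volatile String state = "RUNNING"; // single-word state: volatile suffices

    void addContainers(int n) {
        writeLock.lock();                      // acquire before try, release in finally
        try {
            numContainers += n;
        } finally {
            writeLock.unlock();
        }
    }

    int getNumContainers() {
        readLock.lock();                       // multiple readers may hold this at once
        try {
            return numContainers;
        } finally {
            readLock.unlock();
        }
    }

    String getState() {
        return state;                          // volatile read, no lock required
    }
}
```

Note that the diff itself calls `writeLock.lock()` inside the `try` block; the sketch uses the more conventional lock-before-`try` idiom, which never unlocks a lock that was not acquired.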

[06/13] hadoop git commit: YARN-5656. Fix ReservationACLsTestBase. (Sean Po via asuresh)

2016-09-21 Thread drankye
YARN-5656. Fix ReservationACLsTestBase. (Sean Po via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9f03b403
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9f03b403
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9f03b403

Branch: refs/heads/HADOOP-12756
Commit: 9f03b403ec69658fc57bc0f6b832da0e3c746497
Parents: e45307c
Author: Arun Suresh 
Authored: Tue Sep 20 12:27:17 2016 -0700
Committer: Arun Suresh 
Committed: Tue Sep 20 12:27:17 2016 -0700

--
 .../reservation/NoOverCommitPolicy.java | 12 -
 .../exceptions/MismatchedUserException.java | 46 
 .../ReservationACLsTestBase.java|  2 +
 .../reservation/TestNoOverCommitPolicy.java | 21 -
 4 files changed, 2 insertions(+), 79 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9f03b403/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/NoOverCommitPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/NoOverCommitPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/NoOverCommitPolicy.java
index 814d4b5..55f1d00 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/NoOverCommitPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/NoOverCommitPolicy.java
@@ -21,7 +21,6 @@ package org.apache.hadoop.yarn.server.resourcemanager.reservation;
 import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.yarn.api.records.ReservationId;
-import org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.MismatchedUserException;
 import org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningException;
 import org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.ResourceOverCommitException;
 
@@ -39,17 +38,6 @@ public class NoOverCommitPolicy implements SharingPolicy {
   public void validate(Plan plan, ReservationAllocation reservation)
   throws PlanningException {
 
-ReservationAllocation oldReservation =
-plan.getReservationById(reservation.getReservationId());
-
-// check updates are using same name
-if (oldReservation != null
-&& !oldReservation.getUser().equals(reservation.getUser())) {
-  throw new MismatchedUserException(
-  "Updating an existing reservation with mismatching user:"
-  + oldReservation.getUser() + " != " + reservation.getUser());
-}
-
 RLESparseResourceAllocation available = plan.getAvailableResourceOverTime(
 reservation.getUser(), reservation.getReservationId(),
 reservation.getStartTime(), reservation.getEndTime());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9f03b403/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/exceptions/MismatchedUserException.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/exceptions/MismatchedUserException.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/exceptions/MismatchedUserException.java
deleted file mode 100644
index 7b4419b..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/exceptions/MismatchedUserException.java
+++ /dev/null
@@ -1,46 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with 

[05/13] hadoop git commit: Addendum patch for fix javadocs failure which is caused by YARN-3141. (wangda)

2016-09-21 Thread drankye
Addendum patch for fix javadocs failure which is caused by YARN-3141. (wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e45307c9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e45307c9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e45307c9

Branch: refs/heads/HADOOP-12756
Commit: e45307c9a063248fcfb08281025d87c4abd343b1
Parents: c6d1d74
Author: Wangda Tan 
Authored: Tue Sep 20 11:21:01 2016 -0700
Committer: Wangda Tan 
Committed: Tue Sep 20 11:21:01 2016 -0700

--
 .../resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e45307c9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
index f40ecd7..fd43e74 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
@@ -328,7 +328,7 @@ public class FiCaSchedulerApp extends SchedulerApplicationAttempt {
* of the resources that will be allocated to and preempted from this
* application.
*
-   * @param rc
+   * @param resourceCalculator
* @param clusterResource
* @param minimumAllocation
* @return an allocation


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[12/13] hadoop git commit: Merge branch 'trunk' into HADOOP-12756

2016-09-21 Thread drankye
Merge branch 'trunk' into HADOOP-12756


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/846c5ceb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/846c5ceb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/846c5ceb

Branch: refs/heads/HADOOP-12756
Commit: 846c5ceb3a929ad7b2dcea9bef07299af17bdc84
Parents: a49b3be 964e546
Author: Kai Zheng 
Authored: Fri Sep 23 08:42:28 2016 +0800
Committer: Kai Zheng 
Committed: Fri Sep 23 08:42:28 2016 +0800

--
 .../apache/hadoop/io/ElasticByteBufferPool.java |2 +-
 .../apache/hadoop/io/erasurecode/ECChunk.java   |   22 +
 .../io/erasurecode/rawcoder/CoderUtil.java  |3 +
 .../AbstractDelegationTokenSecretManager.java   |2 +-
 .../org/apache/hadoop/hdfs/DFSInputStream.java  |   20 +-
 .../hadoop/hdfs/DFSStripedInputStream.java  |  654 +--
 .../hadoop/hdfs/PositionStripeReader.java   |  104 +
 .../hadoop/hdfs/StatefulStripeReader.java   |   95 +
 .../org/apache/hadoop/hdfs/StripeReader.java|  463 +
 .../hadoop/hdfs/util/StripedBlockUtil.java  |  158 +-
 .../apache/hadoop/hdfs/TestEncryptionZones.java |   23 +-
 .../blockmanagement/TestBlockTokenWithDFS.java  |8 +-
 .../TestBlockTokenWithDFSStriped.java   |   23 +-
 .../hdfs/tools/TestDFSZKFailoverController.java |   18 +-
 .../hadoop/hdfs/util/TestStripedBlockUtil.java  |1 -
 .../dev-support/findbugs-exclude.xml|   10 +
 .../apache/hadoop/yarn/webapp/Dispatcher.java   |9 +
 .../org/apache/hadoop/yarn/webapp/WebApp.java   |4 +-
 .../hadoop/yarn/webapp/view/RobotsTextPage.java |   39 +
 .../apache/hadoop/yarn/webapp/TestWebApp.java   |   26 +
 .../reservation/NoOverCommitPolicy.java |   12 -
 .../exceptions/MismatchedUserException.java |   46 -
 .../scheduler/capacity/AbstractCSQueue.java |  378 ++--
 .../scheduler/capacity/LeafQueue.java   | 1819 ++
 .../scheduler/capacity/ParentQueue.java |  825 
 .../scheduler/capacity/PlanQueue.java   |  122 +-
 .../scheduler/capacity/ReservationQueue.java|   67 +-
 .../scheduler/common/fica/FiCaSchedulerApp.java |2 +-
 .../ReservationACLsTestBase.java|2 +
 .../reservation/TestNoOverCommitPolicy.java |   21 -
 .../capacity/TestContainerResizing.java |4 +-
 .../server/TestContainerManagerSecurity.java|5 +
 32 files changed, 2781 insertions(+), 2206 deletions(-)
--






[01/13] hadoop git commit: YARN-3140. Improve locks in AbstractCSQueue/LeafQueue/ParentQueue. Contributed by Wangda Tan

2016-09-21 Thread drankye
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-12756 a49b3be38 -> 26d5df390


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b66d9ec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
index 3e9785f..ffb6892 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
@@ -107,68 +107,77 @@ public class ParentQueue extends AbstractCSQueue {
 ", fullname=" + getQueuePath());
   }
 
-  synchronized void setupQueueConfigs(Resource clusterResource)
+  void setupQueueConfigs(Resource clusterResource)
   throws IOException {
-super.setupQueueConfigs(clusterResource);
-StringBuilder aclsString = new StringBuilder();
-for (Map.Entry<AccessType, AccessControlList> e : acls.entrySet()) {
-  aclsString.append(e.getKey() + ":" + e.getValue().getAclString());
-}
+try {
+  writeLock.lock();
+  super.setupQueueConfigs(clusterResource);
+  StringBuilder aclsString = new StringBuilder();
+  for (Map.Entry<AccessType, AccessControlList> e : acls.entrySet()) {
+aclsString.append(e.getKey() + ":" + e.getValue().getAclString());
+  }
 
-StringBuilder labelStrBuilder = new StringBuilder(); 
-if (accessibleLabels != null) {
-  for (String s : accessibleLabels) {
-labelStrBuilder.append(s);
-labelStrBuilder.append(",");
+  StringBuilder labelStrBuilder = new StringBuilder();
+  if (accessibleLabels != null) {
+for (String s : accessibleLabels) {
+  labelStrBuilder.append(s);
+  labelStrBuilder.append(",");
+}
   }
-}
 
-LOG.info(queueName +
-", capacity=" + this.queueCapacities.getCapacity() +
-", absoluteCapacity=" + this.queueCapacities.getAbsoluteCapacity() +
-", maxCapacity=" + this.queueCapacities.getMaximumCapacity() +
-", absoluteMaxCapacity=" + 
this.queueCapacities.getAbsoluteMaximumCapacity() +
-", state=" + state +
-", acls=" + aclsString + 
-", labels=" + labelStrBuilder.toString() + "\n" +
-", reservationsContinueLooking=" + reservationsContinueLooking);
+  LOG.info(queueName + ", capacity=" + this.queueCapacities.getCapacity()
+  + ", absoluteCapacity=" + this.queueCapacities.getAbsoluteCapacity()
+  + ", maxCapacity=" + this.queueCapacities.getMaximumCapacity()
+  + ", absoluteMaxCapacity=" + this.queueCapacities
+  .getAbsoluteMaximumCapacity() + ", state=" + state + ", acls="
+  + aclsString + ", labels=" + labelStrBuilder.toString() + "\n"
+  + ", reservationsContinueLooking=" + reservationsContinueLooking);
+} finally {
+  writeLock.unlock();
+}
   }
 
   private static float PRECISION = 0.0005f; // 0.05% precision
-  synchronized void setChildQueues(Collection<CSQueue> childQueues) {
-// Validate
-float childCapacities = 0;
-for (CSQueue queue : childQueues) {
-  childCapacities += queue.getCapacity();
-}
-float delta = Math.abs(1.0f - childCapacities);  // crude way to check
-// allow capacities being set to 0, and enforce child 0 if parent is 0
-if (((queueCapacities.getCapacity() > 0) && (delta > PRECISION)) || 
-((queueCapacities.getCapacity() == 0) && (childCapacities > 0))) {
-  throw new IllegalArgumentException("Illegal" +
-   " capacity of " + childCapacities + 
-   " for children of queue " + queueName);
-}
-// check label capacities
-for (String nodeLabel : queueCapacities.getExistingNodeLabels()) {
-  float capacityByLabel = queueCapacities.getCapacity(nodeLabel);
-  // check children's labels
-  float sum = 0;
+
+  void setChildQueues(Collection<CSQueue> childQueues) {
+try {
+  writeLock.lock();
+  // Validate
+  float childCapacities = 0;
   for (CSQueue queue : childQueues) {
-sum += queue.getQueueCapacities().getCapacity(nodeLabel);
+childCapacities += queue.getCapacity();
   }
-  if ((capacityByLabel > 0 && Math.abs(1.0f - sum) > PRECISION)
-  || (capacityByLabel == 0) && (sum > 0)) {
-
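The `setChildQueues` hunk above validates that child capacities sum to the parent's whole within `PRECISION`, and that a zero-capacity parent forces zero-capacity children. A standalone sketch of that check — a hypothetical `CapacityCheck` helper, not the Hadoop class itself:

```java
// Hypothetical helper mirroring the ParentQueue.setChildQueues validation:
// child capacities must sum to ~1.0 (within PRECISION) when the parent has
// capacity, and must all be 0 when the parent's capacity is 0.
class CapacityCheck {
    private static final float PRECISION = 0.0005f; // 0.05% precision

    static boolean valid(float parentCapacity, float[] childCapacities) {
        float sum = 0f;
        for (float c : childCapacities) {
            sum += c;
        }
        float delta = Math.abs(1.0f - sum); // crude way to check, as in the diff
        if (parentCapacity > 0 && delta > PRECISION) {
            return false;                   // children must fill the parent
        }
        if (parentCapacity == 0 && sum > 0) {
            return false;                   // zero parent forces zero children
        }
        return true;
    }
}
```

An epsilon comparison is needed because the per-child percentages are floats; an exact `sum == 1.0f` test would spuriously reject configurations like three children at 0.33, 0.33, 0.34.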

[02/13] hadoop git commit: YARN-3140. Improve locks in AbstractCSQueue/LeafQueue/ParentQueue. Contributed by Wangda Tan

2016-09-21 Thread drankye
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b66d9ec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
index 922d711..6129772 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity;
 
 import java.io.IOException;
 import java.util.*;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock;
@@ -85,11 +86,11 @@ public class LeafQueue extends AbstractCSQueue {
   private static final Log LOG = LogFactory.getLog(LeafQueue.class);
 
   private float absoluteUsedCapacity = 0.0f;
-  private int userLimit;
-  private float userLimitFactor;
+  private volatile int userLimit;
+  private volatile float userLimitFactor;
 
   protected int maxApplications;
-  protected int maxApplicationsPerUser;
+  protected volatile int maxApplicationsPerUser;
   
   private float maxAMResourcePerQueuePercent;
 
@@ -97,15 +98,15 @@ public class LeafQueue extends AbstractCSQueue {
   private volatile boolean rackLocalityFullReset;
 
  Map<ApplicationAttemptId, FiCaSchedulerApp> applicationAttemptMap =
-  new HashMap<ApplicationAttemptId, FiCaSchedulerApp>();
+  new ConcurrentHashMap<>();
 
   private Priority defaultAppPriorityPerQueue;
 
-  private OrderingPolicy<FiCaSchedulerApp> pendingOrderingPolicy = null;
+  private final OrderingPolicy<FiCaSchedulerApp> pendingOrderingPolicy;
 
   private volatile float minimumAllocationFactor;
 
-  private Map<String, User> users = new HashMap<String, User>();
+  private Map<String, User> users = new ConcurrentHashMap<>();
 
   private final RecordFactory recordFactory = 
 RecordFactoryProvider.getRecordFactory(null);
@@ -122,7 +123,7 @@ public class LeafQueue extends AbstractCSQueue {
 
   private volatile ResourceLimits cachedResourceLimitsForHeadroom = null;
 
-  private OrderingPolicy<FiCaSchedulerApp> orderingPolicy = null;
+  private volatile OrderingPolicy<FiCaSchedulerApp> orderingPolicy = null;
 
   // Summation of consumed ratios for all users in queue
   private float totalUserConsumedRatio = 0;
@@ -131,7 +132,7 @@ public class LeafQueue extends AbstractCSQueue {
  // record all ignore partition exclusivityRMContainer, this will be used to do
  // preemption, key is the partition of the RMContainer allocated on
  private Map<String, TreeSet<RMContainer>> ignorePartitionExclusivityRMContainers =
-  new HashMap<>();
+  new ConcurrentHashMap<>();
 
   @SuppressWarnings({ "unchecked", "rawtypes" })
   public LeafQueue(CapacitySchedulerContext cs,
@@ -154,125 +155,125 @@ public class LeafQueue extends AbstractCSQueue {
 setupQueueConfigs(cs.getClusterResource());
   }
 
-  protected synchronized void setupQueueConfigs(Resource clusterResource)
+  protected void setupQueueConfigs(Resource clusterResource)
   throws IOException {
-super.setupQueueConfigs(clusterResource);
-
-this.lastClusterResource = clusterResource;
-
-this.cachedResourceLimitsForHeadroom = new ResourceLimits(clusterResource);
-
-// Initialize headroom info, also used for calculating application 
-// master resource limits.  Since this happens during queue initialization
-// and all queues may not be realized yet, we'll use (optimistic) 
-// absoluteMaxCapacity (it will be replaced with the more accurate 
-// absoluteMaxAvailCapacity during headroom/userlimit/allocation events)
-setQueueResourceLimitsInfo(clusterResource);
+try {
+  writeLock.lock();
+  super.setupQueueConfigs(clusterResource);
 
-CapacitySchedulerConfiguration conf = csContext.getConfiguration();
+  this.lastClusterResource = clusterResource;
 
-
setOrderingPolicy(conf.getOrderingPolicy(getQueuePath()));
+  this.cachedResourceLimitsForHeadroom = new ResourceLimits(
+  clusterResource);
 
-userLimit = conf.getUserLimit(getQueuePath());
-userLimitFactor = conf.getUserLimitFactor(getQueuePath());
+  // Initialize headroom info, also used for 
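The LeafQueue hunks above also swap `HashMap` for `ConcurrentHashMap` on the application and user maps. The reason is that once readers no longer hold the queue monitor, a plain `HashMap` could be observed mid-rehash. A sketch with a hypothetical `UserTracker` class (not `LeafQueue` itself):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of why the patch moves to ConcurrentHashMap: reads that
// happen outside any lock stay safe, and read-modify-write updates can be
// done atomically with merge() instead of an external synchronized block.
class UserTracker {
    private final Map<String, Integer> users = new ConcurrentHashMap<>();

    void addApp(String user) {
        users.merge(user, 1, Integer::sum); // atomic increment per user
    }

    int appsFor(String user) {
        return users.getOrDefault(user, 0); // lock-free, safe concurrent read
    }
}
```

`ConcurrentHashMap` iterators are weakly consistent rather than fail-fast, which suits monitoring paths that walk the map while the scheduler mutates it.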

[10/13] hadoop git commit: HDFS-9333. Some tests using MiniDFSCluster errored complaining port in use. (iwasakims)

2016-09-21 Thread drankye
HDFS-9333. Some tests using MiniDFSCluster errored complaining port in use. 
(iwasakims)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/964e546a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/964e546a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/964e546a

Branch: refs/heads/HADOOP-12756
Commit: 964e546ab1dba5f5d53b209ec6c9a70a85654765
Parents: 5a58bfe
Author: Masatake Iwasaki 
Authored: Wed Sep 21 10:35:25 2016 +0900
Committer: Masatake Iwasaki 
Committed: Wed Sep 21 10:35:25 2016 +0900

--
 .../blockmanagement/TestBlockTokenWithDFS.java  |  8 ++-
 .../TestBlockTokenWithDFSStriped.java   | 23 +++-
 .../hdfs/tools/TestDFSZKFailoverController.java | 18 ++-
 3 files changed, 42 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/964e546a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
index e7e7739..9374ae8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
@@ -61,6 +61,7 @@ import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.net.ServerSocketUtil;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.log4j.Level;
@@ -349,7 +350,12 @@ public class TestBlockTokenWithDFS {
 Configuration conf = getConf(numDataNodes);
 
 try {
-  cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDataNodes).build();
+  // prefer non-ephemeral port to avoid port collision on restartNameNode
+  cluster = new MiniDFSCluster.Builder(conf)
+  .nameNodePort(ServerSocketUtil.getPort(19820, 100))
+  .nameNodeHttpPort(ServerSocketUtil.getPort(19870, 100))
+  .numDataNodes(numDataNodes)
+  .build();
   cluster.waitActive();
   assertEquals(numDataNodes, cluster.getDataNodes().size());
   doTestRead(conf, cluster, false);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/964e546a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
index 64a48c2..1714561 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFSStriped.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
 import org.apache.hadoop.hdfs.server.balancer.TestBalancer;
 import org.apache.hadoop.hdfs.util.StripedBlockUtil;
+import org.apache.hadoop.net.ServerSocketUtil;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.Timeout;
@@ -59,7 +60,27 @@ public class TestBlockTokenWithDFSStriped extends TestBlockTokenWithDFS {
   @Override
   public void testRead() throws Exception {
 conf = getConf();
-cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDNs).build();
+
+/*
+ * prefer non-ephemeral port to avoid conflict with tests using
+ * ephemeral ports on MiniDFSCluster#restartDataNode(true).
+ */
+Configuration[] overlays = new Configuration[numDNs];
+for (int i = 0; i < overlays.length; i++) {
+  int offset = i * 10;
+  Configuration c = new Configuration();
+  c.set(DFSConfigKeys.DFS_DATANODE_ADDRESS_KEY, "127.0.0.1:"
+  + ServerSocketUtil.getPort(19866 + offset, 100));
+  c.set(DFSConfigKeys.DFS_DATANODE_IPC_ADDRESS_KEY, "127.0.0.1:"
+  + ServerSocketUtil.getPort(19867 + offset, 100));
+  overlays[i] = c;
+}
+
+cluster = new 
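The HDFS-9333 hunks above prefer fixed non-ephemeral ports probed via `ServerSocketUtil.getPort(base, retries)` so a restarted NameNode or DataNode does not collide with a port the OS handed out ephemerally. A simplified, hypothetical sketch of such probing (the real `ServerSocketUtil` logic may differ, e.g. by retrying random ports):

```java
import java.io.IOException;
import java.net.ServerSocket;

// Hypothetical port probe: walk forward from a preferred non-ephemeral port
// until a bindable one is found, as an alternative to ephemeral port 0.
class PortProbe {
    static int getPort(int preferred, int retries) throws IOException {
        for (int p = preferred; p < preferred + retries; p++) {
            try (ServerSocket s = new ServerSocket(p)) {
                return s.getLocalPort(); // port was free; release it for reuse
            } catch (IOException e) {
                // port busy, try the next candidate
            }
        }
        throw new IOException(
            "no free port in [" + preferred + ", " + (preferred + retries) + ")");
    }
}
```

There is an inherent race between probing and the later bind by the cluster, but probing from a fixed base keeps the tests out of the ephemeral range, which is what the restart scenarios in these tests need.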

[Hadoop Wiki] Update of "Books" by Packt Publishing

2016-09-21 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "Books" page has been changed by Packt Publishing:
https://wiki.apache.org/hadoop/Books?action=diff&rev1=37&rev2=38

Comment:
Book added

  }}}
  
  
+ === Hadoop: Data Processing and Modelling ===
+ 
+ '''Name:''' [[https://www.packtpub.com/big-data-and-business-intelligence/hadoop-data-processing-and-modelling|Hadoop: Data Processing and Modelling]]
+ 
+ '''Authors:''' Garry Turkington, Tanmay Deshpande, Sandeep Karanth
+ 
+ '''Publisher:''' Packt
+ 
+ '''Date of Publishing:''' August 2016
+ 
+ Unlock the power of your data with Hadoop 2.X ecosystem and its data 
warehousing techniques across large data sets.
+ 
  === Hadoop Explained (Free eBook Download) ===
 '''Name:''' [[https://www.packtpub.com/packt/free-ebook/hadoop-explained|Hadoop Explained]]
  




[Hadoop Wiki] Update of "Books" by Packt Publishing

2016-09-21 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "Books" page has been changed by Packt Publishing:
https://wiki.apache.org/hadoop/Books?action=diff&rev1=36&rev2=37

Comment:
Added a free eBook

  }}}
  
  
+ === Hadoop Explained (Free eBook Download) ===
+ '''Name:''' [[https://www.packtpub.com/packt/free-ebook/hadoop-explained|Hadoop Explained]]
+ 
+ '''Author:''' Aravind Shenoy
+ 
+ '''Publisher:''' Packt Publishing
+ 
+ Learn how MapReduce organizes and processes large sets of data and discover 
the advantages of Hadoop - from scalability to security, see how Hadoop handles 
huge amounts of data with care
+ 
  === Hadoop Real-World Solutions Cookbook- Second Edition ===
 '''Name:''' [[https://www.packtpub.com/big-data-and-business-intelligence/hadoop-real-world-solutions-cookbook-second-edition|Hadoop Real-World Solutions Cookbook- Second Edition]]
  
