[2/4] hadoop git commit: HDFS-7263. Snapshot read can reveal future bytes for appended files. Contributed by Tao Luo. Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HDFS-7263. Snapshot read can reveal future bytes for appended files. 
Contributed by Tao Luo.
Moved CHANGES.txt entry to 2.6.1


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa264114
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa264114
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa264114

Branch: refs/heads/trunk
Commit: fa2641143c0d74c4fef122d79f27791e15d3b43f
Parents: f2b4bc9
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 11:45:43 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 11:45:43 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa264114/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e4e2896..1507cbe 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1819,9 +1819,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-7301. TestMissingBlocksAlert should use MXBeans instead of old web UI.
 (Zhe Zhang via wheat9)
 
-HDFS-7263. Snapshot read can reveal future bytes for appended files.
-(Tao Luo via shv)
-
 HDFS-7315. DFSTestUtil.readFileBuffer opens extra FSDataInputStream.
 (Plamen Jeliazkov via wheat9)
 
@@ -2339,6 +2336,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7235. DataNode#transferBlock should report blocks that don't exist
 using reportBadBlock (yzhang via cmccabe)
 
+HDFS-7263. Snapshot read can reveal future bytes for appended files.
+(Tao Luo via shv)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[4/4] hadoop git commit: HDFS-7263. Snapshot read can reveal future bytes for appended files. Contributed by Tao Luo. Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HDFS-7263. Snapshot read can reveal future bytes for appended files. 
Contributed by Tao Luo.
Moved CHANGES.txt entry to 2.6.1

(cherry picked from commit fa2641143c0d74c4fef122d79f27791e15d3b43f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b06d3427
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b06d3427
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b06d3427

Branch: refs/heads/branch-2.7
Commit: b06d342749db39ec274d925dafc0627e891a1bee
Parents: f40714f
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 11:45:43 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 11:46:58 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b06d3427/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index a851e70..d06c368 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -688,9 +688,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-7301. TestMissingBlocksAlert should use MXBeans instead of old web UI.
 (Zhe Zhang via wheat9)
 
-HDFS-7263. Snapshot read can reveal future bytes for appended files.
-(Tao Luo via shv)
-
 HDFS-7315. DFSTestUtil.readFileBuffer opens extra FSDataInputStream.
 (Plamen Jeliazkov via wheat9)
 
@@ -1209,6 +1206,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7235. DataNode#transferBlock should report blocks that don't exist
 using reportBadBlock (yzhang via cmccabe)
 
+HDFS-7263. Snapshot read can reveal future bytes for appended files.
+(Tao Luo via shv)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[3/4] hadoop git commit: HDFS-7263. Snapshot read can reveal future bytes for appended files. Contributed by Tao Luo. Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HDFS-7263. Snapshot read can reveal future bytes for appended files. 
Contributed by Tao Luo.
Moved CHANGES.txt entry to 2.6.1

(cherry picked from commit fa2641143c0d74c4fef122d79f27791e15d3b43f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9839abe1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9839abe1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9839abe1

Branch: refs/heads/branch-2
Commit: 9839abe1a8f0f92b9fa37cef398b6ac09eb9389b
Parents: c2a9c39
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 11:45:43 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 11:46:30 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9839abe1/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 869791c..7ae1b55 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1486,9 +1486,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-7301. TestMissingBlocksAlert should use MXBeans instead of old web UI.
 (Zhe Zhang via wheat9)
 
-HDFS-7263. Snapshot read can reveal future bytes for appended files.
-(Tao Luo via shv)
-
 HDFS-7315. DFSTestUtil.readFileBuffer opens extra FSDataInputStream.
 (Plamen Jeliazkov via wheat9)
 
@@ -2014,6 +2011,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7235. DataNode#transferBlock should report blocks that don't exist
 using reportBadBlock (yzhang via cmccabe)
 
+HDFS-7263. Snapshot read can reveal future bytes for appended files.
+(Tao Luo via shv)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[1/4] hadoop git commit: HDFS-7263. Snapshot read can reveal future bytes for appended files. Contributed by Tao Luo. (cherry picked from commit 8bfef590295372a48bd447b1462048008810ee17)

2015-08-14 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c2a9c3929 -> 9839abe1a
  refs/heads/branch-2.6 33e559d75 -> 27991b6fd
  refs/heads/branch-2.7 f40714f8d -> b06d34274
  refs/heads/trunk f2b4bc9b6 -> fa2641143


HDFS-7263. Snapshot read can reveal future bytes for appended files. 
Contributed by Tao Luo.
(cherry picked from commit 8bfef590295372a48bd447b1462048008810ee17)

Conflicts:
hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/27991b6f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/27991b6f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/27991b6f

Branch: refs/heads/branch-2.6
Commit: 27991b6fdb38931f9ce0f2c8a615f9fd9da2a02f
Parents: 33e559d
Author: Tao Luo tao@wandisco.com
Authored: Wed Oct 29 20:20:11 2014 -0700
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 11:43:31 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/DFSInputStream.java  |  3 +-
 .../snapshot/TestSnapshotFileLength.java| 42 +++-
 3 files changed, 37 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/27991b6f/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 10fd981..bbe7dba 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -41,6 +41,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7235. DataNode#transferBlock should report blocks that don't exist
 using reportBadBlock (yzhang via cmccabe)
 
+HDFS-7263. Snapshot read can reveal future bytes for appended files.
+(Tao Luo via shv)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/27991b6f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index ff65ebc..db06d3b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -801,7 +801,8 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   }
   int realLen = (int) Math.min(len, (blockEnd - pos + 1L));
   if (locatedBlocks.isLastBlockComplete()) {
-realLen = (int) Math.min(realLen, locatedBlocks.getFileLength());
+realLen = (int) Math.min(realLen,
+locatedBlocks.getFileLength() - pos);
   }
   int result = readBuffer(strategy, off, realLen, corruptedBlockMap);
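
The one-line change above is the entire fix: the cap on realLen must be
relative to the current stream position, not absolute. A minimal
standalone sketch, with hypothetical sizes (1 MB snapshot length, reader
positioned at 512 KB, file appended past the snapshot), shows how the
old cap let a positional read run into appended bytes:

    public class SnapshotReadCap {
      public static void main(String[] args) {
        long pos = 512 * 1024;               // current read position in the stream
        long snapshotLen = 1024 * 1024;      // locatedBlocks.getFileLength() at snapshot time
        long blockEnd = 2 * 1024 * 1024 - 1; // last byte of the block after the append
        int len = 1024 * 1024;               // bytes the caller asked for

        int realLen = (int) Math.min(len, (blockEnd - pos + 1L));
        int oldCap = (int) Math.min(realLen, snapshotLen);        // 1048576: runs past the snapshot
        int newCap = (int) Math.min(realLen, snapshotLen - pos);  // 524288: stops at the snapshot end
        System.out.println("old cap=" + oldCap + ", fixed cap=" + newCap);
      }
    }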
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/27991b6f/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
index 32534f0..98aafc1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
@@ -21,7 +21,10 @@ import java.io.ByteArrayOutputStream;
 import java.io.PrintStream;
 
 
+import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.hdfs.AppendTestUtil;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -55,6 +58,8 @@ public class TestSnapshotFileLength {
 
   @Before
   public void setUp() throws Exception {
+conf.setLong(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, BLOCKSIZE);
+conf.setInt(DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, BLOCKSIZE);
 cluster = new MiniDFSCluster.Builder(conf).numDataNodes(REPLICATION)
   .build();
 cluster.waitActive();
@@ -81,40 +86,57 @@ public class TestSnapshotFileLength {
 
 int bytesRead;
 byte[] buffer = new byte[BLOCKSIZE * 8];
+int origLen = BLOCKSIZE + 1;
+int toAppend = BLOCKSIZE;

[1/5] hadoop git commit: HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu.

2015-08-14 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 9839abe1a -> e0b1744de
  refs/heads/branch-2.6 27991b6fd -> d6050f06a
  refs/heads/branch-2.7 b06d34274 -> a046f7e57
  refs/heads/trunk fa2641143 -> 24a11e399


HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu.

(cherry picked from commit 9e63cb4492896ffb78c84e27f263a61ca12148c8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7adb6f95
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7adb6f95
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7adb6f95

Branch: refs/heads/branch-2.6
Commit: 7adb6f9501e8efaa712fb70fe5a97e233622e3e1
Parents: 27991b6
Author: Haohui Mai whe...@apache.org
Authored: Sun Nov 9 17:48:26 2014 -0800
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:07:16 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  2 +
 .../hadoop/security/UserGroupInformation.java   | 38 ++--
 .../hadoop/security/TestUGILoginFromKeytab.java | 91 
 3 files changed, 126 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7adb6f95/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 583f6ae..3a87612 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -329,6 +329,8 @@ Release 2.6.0 - 2014-11-18
 
 HADOOP-11247. Fix a couple javac warnings in NFS. (Brandon Li via wheat9)
 
+HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
+
   BUG FIXES
 
 HADOOP-11182. GraphiteSink emits wrong timestamps (Sascha Coenen via 
raviprak)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7adb6f95/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index fbefdb1..7fb036c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -86,9 +86,21 @@ public class UserGroupInformation {
* Percentage of the ticket window to use before we renew ticket.
*/
   private static final float TICKET_RENEW_WINDOW = 0.80f;
+  private static boolean shouldRenewImmediatelyForTests = false;
   static final String HADOOP_USER_NAME = "HADOOP_USER_NAME";
   static final String HADOOP_PROXY_USER = "HADOOP_PROXY_USER";
-  
+
+  /**
+   * For the purposes of unit tests, we want to test login
+   * from keytab and don't want to wait until the renew
+   * window (controlled by TICKET_RENEW_WINDOW).
+   * @param immediate true if we should login without waiting for ticket window
+   */
+  @VisibleForTesting
+  static void setShouldRenewImmediatelyForTests(boolean immediate) {
+shouldRenewImmediatelyForTests = immediate;
+  }
+
   /** 
* UgiMetrics maintains UGI activity statistics
* and publishes them through the metrics interfaces.
@@ -586,6 +598,20 @@ public class UserGroupInformation {
 user.setLogin(login);
   }
 
+  private static Class<?> KEY_TAB_CLASS = KerberosKey.class;
+  static {
+try {
+  // We use KEY_TAB_CLASS to determine if the UGI is logged in from
+  // keytab. In JDK6 and JDK7, if useKeyTab and storeKey are specified
+  // in the Krb5LoginModule, then some number of KerberosKey objects
+  // are added to the Subject's private credentials. However, in JDK8,
+  // a KeyTab object is added instead. More details in HADOOP-10786.
+  KEY_TAB_CLASS = Class.forName("javax.security.auth.kerberos.KeyTab");
+} catch (ClassNotFoundException cnfe) {
+  // Ignore. javax.security.auth.kerberos.KeyTab does not exist in JDK6.
+}
+  }
+
   /**
* Create a UserGroupInformation for the given subject.
* This does not change the subject or acquire new credentials.
@@ -594,7 +620,7 @@ public class UserGroupInformation {
   UserGroupInformation(Subject subject) {
 this.subject = subject;
 this.user = subject.getPrincipals(User.class).iterator().next();
-this.isKeytab = 
!subject.getPrivateCredentials(KerberosKey.class).isEmpty();
+this.isKeytab = !subject.getPrivateCredentials(KEY_TAB_CLASS).isEmpty();
 this.isKrbTkt = 
!subject.getPrivateCredentials(KerberosTicket.class).isEmpty();
   }
   
@@ 
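
The static block in the hunk above is a reflective feature probe:
resolve javax.security.auth.kerberos.KeyTab where it exists and fall
back to KerberosKey otherwise. A compilable sketch of the same pattern;
the class and method names here are illustrative, not the actual UGI
internals:

    import javax.security.auth.Subject;
    import javax.security.auth.kerberos.KerberosKey;

    public class KeytabProbe {
      // On JDK8 a keytab login stores a KeyTab credential in the Subject;
      // older JDKs store KerberosKey objects instead (see HADOOP-10786).
      private static final Class<?> KEY_TAB_CLASS;
      static {
        Class<?> c = KerberosKey.class;
        try {
          c = Class.forName("javax.security.auth.kerberos.KeyTab");
        } catch (ClassNotFoundException e) {
          // Class absent on this JDK: keep the KerberosKey fallback.
        }
        KEY_TAB_CLASS = c;
      }

      static boolean isFromKeytab(Subject subject) {
        return !subject.getPrivateCredentials(KEY_TAB_CLASS).isEmpty();
      }
    }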

[3/3] hadoop git commit: HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu. Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu.
Moved CHANGES.txt entry to 2.6.1

(cherry picked from commit e7aa81394dce61cc96d480e21204263a5f2ed153)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8f4a09b6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8f4a09b6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8f4a09b6

Branch: refs/heads/branch-2.7
Commit: 8f4a09b6076de9fbd6cd8ccaddf72ba9c94429ff
Parents: a046f7e
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:23:51 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:24:59 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8f4a09b6/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 78f5c15..8fc969f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -139,8 +139,6 @@ Release 2.7.0 - 2015-04-20
 
 HADOOP-10563. Remove the dependency of jsp in trunk. (wheat9)
 
-HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
-
 HADOOP-11291. Log the cause of SASL connection failures.
 (Stephen Chu via cnauroth)
 



[1/3] hadoop git commit: HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu. Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 e0b1744de -> a6ec5c3de
  refs/heads/branch-2.7 a046f7e57 -> 8f4a09b60
  refs/heads/trunk 24a11e399 -> e7aa81394


HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu.
Moved CHANGES.txt entry to 2.6.1


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e7aa8139
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e7aa8139
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e7aa8139

Branch: refs/heads/trunk
Commit: e7aa81394dce61cc96d480e21204263a5f2ed153
Parents: 24a11e3
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:23:51 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:23:51 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e7aa8139/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index c84af6a..6e48c20 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1199,8 +1199,6 @@ Release 2.7.0 - 2015-04-20
 
 HADOOP-10563. Remove the dependency of jsp in trunk. (wheat9)
 
-HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
-
 HADOOP-11291. Log the cause of SASL connection failures.
 (Stephen Chu via cnauroth)
 



[1/4] hadoop git commit: HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification pipe is full (zhaoyunjiong via cmccabe) (cherry picked from commit 86e3993def01223f92b8d1dd35f6c1f8ab60

2015-08-14 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 15760a148 -> 796b94df1
  refs/heads/branch-2.6 3c9c2b404 -> 597521fcf
  refs/heads/branch-2.7 90f364172 -> d8d33055b
  refs/heads/trunk 08bd4edf4 -> 05ed69058


HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification pipe is 
full (zhaoyunjiong via cmccabe)
(cherry picked from commit 86e3993def01223f92b8d1dd35f6c1f8ab6033f5)

(cherry picked from commit f6d1bf5ed1cf647d82e676df15587de42b1faa42)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/597521fc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/597521fc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/597521fc

Branch: refs/heads/branch-2.6
Commit: 597521fcf8f678c27b7b5c2b11bb855695d60413
Parents: 3c9c2b4
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Mon Dec 1 11:42:10 2014 -0800
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:49:52 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt  |  3 +++
 .../apache/hadoop/net/unix/DomainSocketWatcher.java  | 15 +++
 2 files changed, 18 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/597521fc/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 19bc188..c790af5 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -18,6 +18,9 @@ Release 2.6.1 - UNRELEASED
 
 HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
 
+HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification
+pipe is full (zhaoyunjiong via cmccabe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/597521fc/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/unix/DomainSocketWatcher.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/unix/DomainSocketWatcher.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/unix/DomainSocketWatcher.java
index 95ef30d..0172f6b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/unix/DomainSocketWatcher.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/unix/DomainSocketWatcher.java
@@ -103,6 +103,7 @@ public final class DomainSocketWatcher implements Closeable 
{
 public boolean handle(DomainSocket sock) {
   assert(lock.isHeldByCurrentThread());
   try {
+kicked = false;
 if (LOG.isTraceEnabled()) {
   LOG.trace(this + ": NotificationHandler: doing a read on " +
 sock.fd);
@@ -228,6 +229,14 @@ public final class DomainSocketWatcher implements 
Closeable {
* Whether or not this DomainSocketWatcher is closed.
*/
   private boolean closed = false;
+  
+  /**
+   * True if we have written a byte to the notification socket. We should not
+   * write anything else to the socket until the notification handler has had a
+   * chance to run. Otherwise, our thread might block, causing deadlock. 
+   * See HADOOP-11333 for details.
+   */
+  private boolean kicked = false;
 
   public DomainSocketWatcher(int interruptCheckPeriodMs) throws IOException {
 if (loadingFailureReason != null) {
@@ -348,8 +357,14 @@ public final class DomainSocketWatcher implements 
Closeable {
*/
   private void kick() {
 assert(lock.isHeldByCurrentThread());
+
+if (kicked) {
+  return;
+}
+
 try {
   notificationSockets[0].getOutputStream().write(0);
+  kicked = true;
 } catch (IOException e) {
   if (!closed) {
 LOG.error(this + ": error writing to notificationSockets[0]", e);
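
The kicked flag added above enforces a simple invariant: at most one
unconsumed wakeup byte ever sits in the notification pipe, so kick()
can never block on a full pipe while holding the lock. A minimal sketch
of that invariant, with a plain OutputStream standing in for
notificationSockets[0]:

    import java.io.IOException;
    import java.io.OutputStream;

    public class KickOnce {
      private final OutputStream pipe;  // stands in for notificationSockets[0]
      private boolean kicked = false;

      KickOnce(OutputStream pipe) { this.pipe = pipe; }

      synchronized void kick() throws IOException {
        if (kicked) {
          return;        // a wakeup byte is already in flight
        }
        pipe.write(0);   // at most one unacknowledged byte in the pipe
        kicked = true;
      }

      synchronized void onHandlerRan() {
        kicked = false;  // handler drained the pipe; next kick may write again
      }
    }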



[2/4] hadoop git commit: HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification pipe is full (zhaoyunjiong via cmccabe) Moved to 2.6.1

2015-08-14 Thread vinayakumarb
HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification pipe is 
full (zhaoyunjiong via cmccabe)
Moved to 2.6.1


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/05ed6905
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/05ed6905
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/05ed6905

Branch: refs/heads/trunk
Commit: 05ed69058f22ebeccc58faf0be491c269e950526
Parents: 08bd4ed
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:53:46 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:53:46 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/05ed6905/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 6e48c20..57ef1c5 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1499,9 +1499,6 @@ Release 2.7.0 - 2015-04-20
 HADOOP-11300. KMS startup scripts must not display the keystore /
 truststore passwords. (Arun Suresh via wang)
 
-HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification
-pipe is full (zhaoyunjiong via cmccabe)
-
 HADOOP-11337. KeyAuthorizationKeyProvider access checks need to be done
 atomically. (Dian Fu via wang)
 
@@ -1885,6 +1882,9 @@ Release 2.6.1 - UNRELEASED
 
 HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
 
+HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification
+pipe is full (zhaoyunjiong via cmccabe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[2/3] hadoop git commit: HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu. Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu.
Moved CHANGES.txt entry to 2.6.1

(cherry picked from commit e7aa81394dce61cc96d480e21204263a5f2ed153)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a6ec5c3d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a6ec5c3d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a6ec5c3d

Branch: refs/heads/branch-2
Commit: a6ec5c3dec3d83877124e84be001352513088056
Parents: e0b1744
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:23:51 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:24:29 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6ec5c3d/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 4b1abea..05c5a56 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -708,8 +708,6 @@ Release 2.7.0 - 2015-04-20
 
 HADOOP-10563. Remove the dependency of jsp in trunk. (wheat9)
 
-HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
-
 HADOOP-11291. Log the cause of SASL connection failures.
 (Stephen Chu via cnauroth)
 



[4/4] hadoop git commit: HDFS-7225. Remove stale block invalidation work when DN re-registers with different UUID. (Zhe Zhang and Andrew Wang) Moved to 2.6.1

2015-08-14 Thread vinayakumarb
HDFS-7225. Remove stale block invalidation work when DN re-registers with 
different UUID. (Zhe Zhang and Andrew Wang)
Moved to 2.6.1

(cherry picked from commit 08bd4edf4092901273da0d73a5cc760fdc11052b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/90f36417
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/90f36417
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/90f36417

Branch: refs/heads/branch-2.7
Commit: 90f3641728a29b7ddb41b020427da8354b2b7d99
Parents: 8f4a09b
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:38:00 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:38:59 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/90f36417/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d06c368..fca9f75 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -717,9 +717,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-7406. SimpleHttpProxyHandler puts incorrect "Connection: Close"
 header. (wheat9)
 
-HDFS-7225. Remove stale block invalidation work when DN re-registers with
-different UUID. (Zhe Zhang and Andrew Wang)
-
 HDFS-7374. Allow decommissioning of dead DataNodes. (Zhe Zhang)
 
 HDFS-7403. Inaccurate javadoc of BlockUCState#COMPLETE state. (
@@ -1209,6 +1206,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7263. Snapshot read can reveal future bytes for appended files.
 (Tao Luo via shv)
 
+HDFS-7225. Remove stale block invalidation work when DN re-registers with
+different UUID. (Zhe Zhang and Andrew Wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[4/5] hadoop git commit: HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu. Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu.
Moved CHANGES.txt entry to 2.6.1

(cherry picked from commit d6050f06a3b7e049541b1cb4597c388abf00a5be)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e0b1744d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e0b1744d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e0b1744d

Branch: refs/heads/branch-2
Commit: e0b1744de9f6ca49458f90012de632f1ff8dda0d
Parents: 9839abe
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:09:10 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:10:22 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e0b1744d/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 1f30b5b..4b1abea 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1407,6 +1407,8 @@ Release 2.6.1 - UNRELEASED
 architecture because it is slower there (Suman Somasundar via Colin P.
 McCabe)
 
+HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[5/5] hadoop git commit: HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu. Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu.
Moved CHANGES.txt entry to 2.6.1

(cherry picked from commit d6050f06a3b7e049541b1cb4597c388abf00a5be)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a046f7e5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a046f7e5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a046f7e5

Branch: refs/heads/branch-2.7
Commit: a046f7e5703a9884a3011d74af714de757324b9d
Parents: b06d342
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:09:10 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:10:58 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a046f7e5/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index f3f2d41..78f5c15 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -835,6 +835,8 @@ Release 2.6.1 - UNRELEASED
 architecture because it is slower there (Suman Somasundar via Colin P.
 McCabe)
 
+HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[3/5] hadoop git commit: HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu. Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu.
Moved CHANGES.txt entry to 2.6.1

(cherry picked from commit d6050f06a3b7e049541b1cb4597c388abf00a5be)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/24a11e39
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/24a11e39
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/24a11e39

Branch: refs/heads/trunk
Commit: 24a11e39960696d75e58df912ec6aa7283be194d
Parents: fa26411
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:09:10 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:09:56 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/24a11e39/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index e458042..c84af6a 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1885,6 +1885,8 @@ Release 2.6.1 - UNRELEASED
 architecture because it is slower there (Suman Somasundar via Colin P.
 McCabe)
 
+HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[2/5] hadoop git commit: HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu. Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu.
Moved CHANGES.txt entry to 2.6.1


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d6050f06
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d6050f06
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d6050f06

Branch: refs/heads/branch-2.6
Commit: d6050f06a3b7e049541b1cb4597c388abf00a5be
Parents: 7adb6f9
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:09:10 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:09:10 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d6050f06/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 3a87612..19bc188 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -16,6 +16,8 @@ Release 2.6.1 - UNRELEASED
 architecture because it is slower there (Suman Somasundar via Colin P.
 McCabe)
 
+HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES
@@ -329,8 +331,6 @@ Release 2.6.0 - 2014-11-18
 
 HADOOP-11247. Fix a couple javac warnings in NFS. (Brandon Li via wheat9)
 
-HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
-
   BUG FIXES
 
 HADOOP-11182. GraphiteSink emits wrong timestamps (Sascha Coenen via 
raviprak)



[1/4] hadoop git commit: HDFS-7225. Remove stale block invalidation work when DN re-registers with different UUID. (Zhe Zhang and Andrew Wang)

2015-08-14 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 a6ec5c3de -> 15760a148
  refs/heads/branch-2.6 d6050f06a -> 3c9c2b404
  refs/heads/branch-2.7 8f4a09b60 -> 90f364172
  refs/heads/trunk e7aa81394 -> 08bd4edf4


HDFS-7225. Remove stale block invalidation work when DN re-registers with 
different UUID. (Zhe Zhang and Andrew Wang)

(cherry picked from commit 406c09ad1150c4971c2b7675fcb0263d40517fbf)
(cherry picked from commit 2e15754a92c6589308ccbbb646166353cc2f2456)

Conflicts:
hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3c9c2b40
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3c9c2b40
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3c9c2b40

Branch: refs/heads/branch-2.6
Commit: 3c9c2b404f4022349df434c0ec66b172404fbe0e
Parents: d6050f0
Author: Andrew Wang w...@apache.org
Authored: Tue Nov 18 22:14:04 2014 -0800
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:36:00 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../server/blockmanagement/BlockManager.java|  21 ++-
 .../server/blockmanagement/DatanodeManager.java |   2 +
 .../TestComputeInvalidateWork.java  | 167 +++
 4 files changed, 156 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c9c2b40/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index bbe7dba..ca1d89f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -44,6 +44,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7263. Snapshot read can reveal future bytes for appended files.
 (Tao Luo via shv)
 
+HDFS-7225. Remove stale block invalidation work when DN re-registers with
+different UUID. (Zhe Zhang and Andrew Wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c9c2b40/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 17112bf..d26cc52 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1112,6 +1112,18 @@ public class BlockManager {
   }
 
   /**
+   * Remove all block invalidation tasks under this datanode UUID;
+   * used when a datanode registers with a new UUID and the old one
+   * is wiped.
+   */
+  void removeFromInvalidates(final DatanodeInfo datanode) {
+if (!namesystem.isPopulatingReplQueues()) {
+  return;
+}
+invalidateBlocks.remove(datanode);
+  }
+
+  /**
* Mark the block belonging to datanode as corrupt
* @param blk Block to be marked as corrupt
* @param dn Datanode which holds the corrupt replica
@@ -3395,7 +3407,14 @@ public class BlockManager {
 return 0;
   }
   try {
-toInvalidate = 
invalidateBlocks.invalidateWork(datanodeManager.getDatanode(dn));
+DatanodeDescriptor dnDescriptor = datanodeManager.getDatanode(dn);
+if (dnDescriptor == null) {
+  LOG.warn("DataNode " + dn + " cannot be found with UUID " +
+  dn.getDatanodeUuid() + ", removing block invalidation work.");
+  invalidateBlocks.remove(dn);
+  return 0;
+}
+toInvalidate = invalidateBlocks.invalidateWork(dnDescriptor);
 
 if (toInvalidate == null) {
   return 0;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c9c2b40/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 6a52349..80965b9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -593,6 +593,8 @@ public 
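
The null-descriptor guard in the BlockManager hunk above exists because
invalidation work is keyed by datanode UUID: when a DataNode's storage
is wiped and it re-registers under a fresh UUID, work queued under the
old key can never be dispatched and would be re-scheduled forever. A
toy illustration with a plain map (the real code uses InvalidateBlocks
and DatanodeDescriptor):

    import java.util.HashMap;
    import java.util.Map;

    public class StaleInvalidationWork {
      public static void main(String[] args) {
        Map<String, Integer> pendingByUuid = new HashMap<>();
        pendingByUuid.put("DS-old-uuid", 42);  // invalidation tasks queued under the old UUID

        // The DN re-registered with a new UUID, so the old descriptor lookup fails.
        boolean descriptorFound = false;       // datanodeManager.getDatanode(dn) == null
        if (!descriptorFound) {
          Integer dropped = pendingByUuid.remove("DS-old-uuid");
          System.out.println("dropped " + dropped + " stale invalidation tasks");
        }
      }
    }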

[2/4] hadoop git commit: HDFS-7225. Remove stale block invalidation work when DN re-registers with different UUID. (Zhe Zhang and Andrew Wang) Moved to 2.6.1

2015-08-14 Thread vinayakumarb
HDFS-7225. Remove stale block invalidation work when DN re-registers with 
different UUID. (Zhe Zhang and Andrew Wang)
Moved to 2.6.1


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/08bd4edf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/08bd4edf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/08bd4edf

Branch: refs/heads/trunk
Commit: 08bd4edf4092901273da0d73a5cc760fdc11052b
Parents: e7aa813
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:38:00 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:38:00 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/08bd4edf/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 1507cbe..dba4535 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1848,9 +1848,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-7406. SimpleHttpProxyHandler puts incorrect "Connection: Close"
 header. (wheat9)
 
-HDFS-7225. Remove stale block invalidation work when DN re-registers with
-different UUID. (Zhe Zhang and Andrew Wang)
-
 HDFS-7374. Allow decommissioning of dead DataNodes. (Zhe Zhang)
 
 HDFS-7403. Inaccurate javadoc of BlockUCState#COMPLETE state. (
@@ -2339,6 +2336,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7263. Snapshot read can reveal future bytes for appended files.
 (Tao Luo via shv)
 
+HDFS-7225. Remove stale block invalidation work when DN re-registers with
+different UUID. (Zhe Zhang and Andrew Wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[3/4] hadoop git commit: HDFS-7225. Remove stale block invalidation work when DN re-registers with different UUID. (Zhe Zhang and Andrew Wang) Moved to 2.6.1

2015-08-14 Thread vinayakumarb
HDFS-7225. Remove stale block invalidation work when DN re-registers with 
different UUID. (Zhe Zhang and Andrew Wang)
Moved to 2.6.1

(cherry picked from commit 08bd4edf4092901273da0d73a5cc760fdc11052b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/15760a14
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/15760a14
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/15760a14

Branch: refs/heads/branch-2
Commit: 15760a148bf3bb1af08b7f07ab22ab03ddd257b4
Parents: a6ec5c3
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:38:00 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:38:23 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/15760a14/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 7ae1b55..2fe056d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1515,9 +1515,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-7406. SimpleHttpProxyHandler puts incorrect "Connection: Close"
 header. (wheat9)
 
-HDFS-7225. Remove stale block invalidation work when DN re-registers with
-different UUID. (Zhe Zhang and Andrew Wang)
-
 HDFS-7374. Allow decommissioning of dead DataNodes. (Zhe Zhang)
 
 HDFS-7403. Inaccurate javadoc of BlockUCState#COMPLETE state. (
@@ -2014,6 +2011,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7263. Snapshot read can reveal future bytes for appended files.
 (Tao Luo via shv)
 
+HDFS-7225. Remove stale block invalidation work when DN re-registers with
+different UUID. (Zhe Zhang and Andrew Wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[20/43] hadoop git commit: HDFS-7885. Datanode should not trust the generation stamp provided by client. Contributed by Tsz Wo Nicholas Sze.

2015-08-14 Thread sjlee
HDFS-7885. Datanode should not trust the generation stamp provided by client. 
Contributed by Tsz Wo Nicholas Sze.

(cherry picked from commit 24db0812be64e83a48ade01fc1eaaeaedad4dec0)
(cherry picked from commit 994dadb9ba0a3b87b6548e6e0801eadd26554d55)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0bc5c649
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0bc5c649
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0bc5c649

Branch: refs/heads/sjlee/hdfs-merge
Commit: 0bc5c6495a7feb4365af0ce5fe48fc87b7e1749f
Parents: e1af1ac
Author: Jing Zhao ji...@apache.org
Authored: Fri Mar 6 10:55:56 2015 -0800
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 23:32:45 2015 -0700

--
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 15 +
 .../hadoop/hdfs/TestBlockReaderLocalLegacy.java | 63 
 2 files changed, 78 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bc5c649/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 0d9f096..0c2337e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -2276,6 +2276,21 @@ class FsDatasetImpl implements 
FsDatasetSpi<FsVolumeImpl> {
   @Override // FsDatasetSpi
   public BlockLocalPathInfo getBlockLocalPathInfo(ExtendedBlock block)
   throws IOException {
+synchronized(this) {
+  final Replica replica = volumeMap.get(block.getBlockPoolId(),
+  block.getBlockId());
+  if (replica == null) {
+throw new ReplicaNotFoundException(block);
+  }
+  if (replica.getGenerationStamp() < block.getGenerationStamp()) {
+throw new IOException(
+"Replica generation stamp < block generation stamp, block="
++ block + ", replica=" + replica);
+  } else if (replica.getGenerationStamp() > block.getGenerationStamp()) {
+block.setGenerationStamp(replica.getGenerationStamp());
+  }
+}
+
 File datafile = getBlockFile(block);
 File metafile = FsDatasetUtil.getMetaFile(datafile, 
block.getGenerationStamp());
 BlockLocalPathInfo info = new BlockLocalPathInfo(block,
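
The reconciliation in the hunk above matters because on-disk checksum
files embed the generation stamp in their names, so trusting a stale
client-supplied stamp resolves to the wrong, or a missing, meta file. A
standalone sketch with a hypothetical metaFileName helper that follows
the blk_<id>_<genstamp>.meta naming convention:

    public class GenStampReconcile {
      static String metaFileName(long blockId, long genStamp) {
        return "blk_" + blockId + "_" + genStamp + ".meta";
      }

      public static void main(String[] args) {
        long blockId = 1073741825L;
        long replicaGenStamp = 1002L;  // what the DataNode actually has on disk
        long clientGenStamp = 1001L;   // stale value supplied by the client
        System.out.println("trusting client: " + metaFileName(blockId, clientGenStamp));
        System.out.println("reconciled:      " + metaFileName(blockId, replicaGenStamp));
      }
    }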

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bc5c649/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderLocalLegacy.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderLocalLegacy.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderLocalLegacy.java
index cb50539..1c4134f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderLocalLegacy.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderLocalLegacy.java
@@ -30,11 +30,16 @@ import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.protocol.BlockLocalPathInfo;
+import org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.net.unix.DomainSocket;
 import org.apache.hadoop.net.unix.TemporarySocketDirectory;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.token.Token;
 import org.junit.Assert;
 import org.junit.Assume;
 import org.junit.BeforeClass;
@@ -153,4 +158,62 @@ public class TestBlockReaderLocalLegacy {
 Arrays.equals(orig, buf);
 cluster.shutdown();
   }
+
+  @Test(timeout=20000)
+  public void testBlockReaderLocalLegacyWithAppend() throws Exception {
+final short REPL_FACTOR = 1;
+final HdfsConfiguration conf = getConfiguration(null);
+conf.setBoolean(DFSConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADERLOCAL, 
true);
+
+final MiniDFSCluster cluster =
+new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
+cluster.waitActive();
+
+final DistributedFileSystem dfs = 

[03/43] hadoop git commit: HDFS-7263. Snapshot read can reveal future bytes for appended files. Contributed by Tao Luo. (cherry picked from commit 8bfef590295372a48bd447b1462048008810ee17)

2015-08-14 Thread sjlee
HDFS-7263. Snapshot read can reveal future bytes for appended files. 
Contributed by Tao Luo.
(cherry picked from commit 8bfef590295372a48bd447b1462048008810ee17)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3827a1ac
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3827a1ac
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3827a1ac

Branch: refs/heads/sjlee/hdfs-merge
Commit: 3827a1acdbc4f9fec3179dcafa614734b5fa31bc
Parents: 1aa9e34
Author: Tao Luo tao@wandisco.com
Authored: Wed Oct 29 20:20:11 2014 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 21:25:12 2015 -0700

--
 .../org/apache/hadoop/hdfs/DFSInputStream.java  |  3 +-
 .../snapshot/TestSnapshotFileLength.java| 42 +++-
 2 files changed, 34 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3827a1ac/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index ff65ebc..db06d3b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -801,7 +801,8 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   }
   int realLen = (int) Math.min(len, (blockEnd - pos + 1L));
   if (locatedBlocks.isLastBlockComplete()) {
-realLen = (int) Math.min(realLen, locatedBlocks.getFileLength());
+realLen = (int) Math.min(realLen,
+locatedBlocks.getFileLength() - pos);
   }
   int result = readBuffer(strategy, off, realLen, corruptedBlockMap);
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3827a1ac/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
index 32534f0..98aafc1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotFileLength.java
@@ -21,7 +21,10 @@ import java.io.ByteArrayOutputStream;
 import java.io.PrintStream;
 
 
+import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.hdfs.AppendTestUtil;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -55,6 +58,8 @@ public class TestSnapshotFileLength {
 
   @Before
   public void setUp() throws Exception {
+conf.setLong(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, BLOCKSIZE);
+conf.setInt(DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, BLOCKSIZE);
 cluster = new MiniDFSCluster.Builder(conf).numDataNodes(REPLICATION)
   .build();
 cluster.waitActive();
@@ -81,40 +86,57 @@ public class TestSnapshotFileLength {
 
 int bytesRead;
 byte[] buffer = new byte[BLOCKSIZE * 8];
+int origLen = BLOCKSIZE + 1;
+int toAppend = BLOCKSIZE;
 FSDataInputStream fis = null;
 FileStatus fileStatus = null;
 
 // Create and write a file.
 Path file1 = new Path(sub, file1Name);
-DFSTestUtil.createFile(hdfs, file1, 0, REPLICATION, SEED);
-DFSTestUtil.appendFile(hdfs, file1, BLOCKSIZE);
+DFSTestUtil.createFile(hdfs, file1, BLOCKSIZE, 0, BLOCKSIZE, REPLICATION, 
SEED);
+DFSTestUtil.appendFile(hdfs, file1, origLen);
 
 // Create a snapshot on the parent directory.
 hdfs.allowSnapshot(sub);
 hdfs.createSnapshot(sub, snapshot1);
 
-// Write more data to the file.
-DFSTestUtil.appendFile(hdfs, file1, BLOCKSIZE);
+Path file1snap1
+= SnapshotTestHelper.getSnapshotPath(sub, snapshot1, file1Name);
+
+// Append to the file.
+FSDataOutputStream out = hdfs.append(file1);
+try {
+  AppendTestUtil.write(out, 0, toAppend);
+  // Test reading from snapshot of file that is open for append
+  byte[] dataFromSnapshot = DFSTestUtil.readFileBuffer(hdfs, file1snap1);
+  assertThat("Wrong data size in snapshot.",
+  

[21/43] hadoop git commit: HDFS-7610. Fix removal of dynamically added DN volumes (Lei (Eddy) Xu via Colin P. McCabe)

2015-08-14 Thread sjlee
HDFS-7610. Fix removal of dynamically added DN volumes (Lei (Eddy) Xu via Colin 
P. McCabe)

(cherry picked from commit a17584936cc5141e3f5612ac3ecf35e27968e439)
(cherry picked from commit 7779f38e68ca4e0f7ac08eb7e5f4801b89979d02)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/65ae3e2f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/65ae3e2f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/65ae3e2f

Branch: refs/heads/sjlee/hdfs-merge
Commit: 65ae3e2ff16ce1114a0115ff916837b0173b77f1
Parents: 0bc5c64
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Jan 20 20:11:09 2015 -0800
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 23:59:56 2015 -0700

--
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 16 +
 .../datanode/fsdataset/impl/FsVolumeList.java   |  8 +++--
 .../fsdataset/impl/TestFsDatasetImpl.java   | 37 ++--
 3 files changed, 49 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/65ae3e2f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 0c2337e..cbcf6b8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -336,7 +336,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 
 StorageType storageType = location.getStorageType();
 final FsVolumeImpl fsVolume = new FsVolumeImpl(
-this, sd.getStorageUuid(), dir, this.conf, storageType);
+this, sd.getStorageUuid(), sd.getCurrentDir(), this.conf, storageType);
 final ReplicaMap tempVolumeMap = new ReplicaMap(fsVolume);
 ArrayList<IOException> exceptions = Lists.newArrayList();
 
@@ -379,19 +379,19 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> 
{
*/
   @Override
   public synchronized void removeVolumes(Collection<StorageLocation> volumes) {
-Set<File> volumeSet = new HashSet<File>();
+Set<String> volumeSet = new HashSet<String>();
 for (StorageLocation sl : volumes) {
-  volumeSet.add(sl.getFile());
+  volumeSet.add(sl.getFile().getAbsolutePath());
 }
 for (int idx = 0; idx < dataStorage.getNumStorageDirs(); idx++) {
   Storage.StorageDirectory sd = dataStorage.getStorageDir(idx);
-  if (volumeSet.contains(sd.getRoot())) {
-String volume = sd.getRoot().toString();
+  String volume = sd.getRoot().getAbsolutePath();
+  if (volumeSet.contains(volume)) {
 LOG.info("Removing " + volume + " from FsDataset.");
 
 // Disable the volume from the service.
 asyncDiskService.removeVolume(sd.getCurrentDir());
-this.volumes.removeVolume(volume);
+this.volumes.removeVolume(sd.getRoot());
 
 // Removed all replica information for the blocks on the volume. Unlike
 // updating the volumeMap in addVolume(), this operation does not scan
@@ -401,7 +401,9 @@ class FsDatasetImpl implements FsDatasetSpiFsVolumeImpl {
   for (IteratorReplicaInfo it = volumeMap.replicas(bpid).iterator();
   it.hasNext(); ) {
 ReplicaInfo block = it.next();
-if (block.getVolume().getBasePath().equals(volume)) {
+String absBasePath =
+  new File(block.getVolume().getBasePath()).getAbsolutePath();
+if (absBasePath.equals(volume)) {
   invalidate(bpid, block);
   blocks.add(block);
   it.remove();
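
The substance of this hunk is a keying change: volumes are matched by normalized absolute-path strings rather than by java.io.File equality, which compares path text verbatim. A minimal standalone sketch of that idea (illustrative names and paths, not the Hadoop code):

import java.io.File;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class VolumeMatchSketch {
  // Collect removal targets keyed by absolute path, then match known volume
  // roots the same way, mirroring the getAbsolutePath() fix above.
  static Set<String> matchVolumes(List<File> removalTargets, List<File> knownRoots) {
    Set<String> targets = new HashSet<String>();
    for (File f : removalTargets) {
      targets.add(f.getAbsolutePath());
    }
    Set<String> matched = new HashSet<String>();
    for (File root : knownRoots) {
      if (targets.contains(root.getAbsolutePath())) {
        matched.add(root.getAbsolutePath());
      }
    }
    return matched;
  }

  public static void main(String[] args) {
    // A relative and an absolute spelling of the same directory compare
    // unequal as File objects but equal once both are made absolute.
    File relative = new File("data/dn1");
    File absolute = new File(relative.getAbsolutePath());
    System.out.println(relative.equals(absolute));  // false
    System.out.println(matchVolumes(Arrays.asList(relative),
        Arrays.asList(absolute)));                  // one match
  }
}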

http://git-wip-us.apache.org/repos/asf/hadoop/blob/65ae3e2f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
index 9483444..b17b90b 100644
--- 

[13/43] hadoop git commit: HDFS-7575. Upgrade should generate a unique storage ID for each volume. (Contributed by Arpit Agarwal)

2015-08-14 Thread sjlee
HDFS-7575. Upgrade should generate a unique storage ID for each volume. 
(Contributed by Arpit Agarwal)

(cherry picked from commit 1d9d166c0beb56aa45e65f779044905acff25d88)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ca8e1b07
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ca8e1b07
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ca8e1b07

Branch: refs/heads/sjlee/hdfs-merge
Commit: ca8e1b0739b6653833f9bc8990ab126420703f66
Parents: e9a2825
Author: Arpit Agarwal a...@apache.org
Authored: Thu Jan 22 14:08:20 2015 -0800
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 22:23:17 2015 -0700

--
 .../hdfs/server/datanode/DataStorage.java   |  35 +++--
 .../hdfs/server/protocol/DatanodeStorage.java   |  19 ++-
 .../hadoop/hdfs/TestDFSUpgradeFromImage.java|  19 ++-
 .../hadoop/hdfs/TestDatanodeLayoutUpgrade.java  |   2 +-
 ...estDatanodeStartupFixesLegacyStorageIDs.java | 139 +++
 .../apache/hadoop/hdfs/UpgradeUtilities.java|   2 +-
 .../server/datanode/SimulatedFSDataset.java |   2 +-
 .../fsdataset/impl/TestFsDatasetImpl.java   |   2 +-
 .../testUpgradeFrom22FixesStorageIDs.tgz| Bin 0 - 3260 bytes
 .../testUpgradeFrom22FixesStorageIDs.txt|  25 
 .../testUpgradeFrom22via26FixesStorageIDs.tgz   | Bin 0 - 3635 bytes
 .../testUpgradeFrom22via26FixesStorageIDs.txt   |  25 
 .../testUpgradeFrom26PreservesStorageIDs.tgz| Bin 0 - 3852 bytes
 .../testUpgradeFrom26PreservesStorageIDs.txt|  25 
 14 files changed, 274 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca8e1b07/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
index 8863724..fc4a682 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
@@ -37,6 +37,7 @@ import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.LayoutVersion;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption;
 import org.apache.hadoop.hdfs.server.common.InconsistentFSStateException;
@@ -142,11 +143,20 @@ public class DataStorage extends Storage {
 this.datanodeUuid = newDatanodeUuid;
   }
 
-  /** Create an ID for this storage. */
-  public synchronized void createStorageID(StorageDirectory sd) {
-    if (sd.getStorageUuid() == null) {
+  /** Create an ID for this storage.
+   * @return true if a new storage ID was generated.
+   * */
+  public synchronized boolean createStorageID(
+      StorageDirectory sd, boolean regenerateStorageIds) {
+    final String oldStorageID = sd.getStorageUuid();
+    if (oldStorageID == null || regenerateStorageIds) {
       sd.setStorageUuid(DatanodeStorage.generateUuid());
+      LOG.info("Generated new storageID " + sd.getStorageUuid() +
+               " for directory " + sd.getRoot() +
+          (oldStorageID == null ? "" : (" to replace " + oldStorageID)));
+      return true;
     }
+    return false;
   }
 
   /**
@@ -677,20 +687,25 @@ public class DataStorage extends Storage {
           + sd.getRoot().getCanonicalPath() + ": namenode clusterID = "
           + nsInfo.getClusterID() + "; datanode clusterID = " + getClusterID());
     }
-
-    // After addition of the federation feature, ctime check is only
-    // meaningful at BlockPoolSliceStorage level.
 
-    // regular start up.
+    // Clusters previously upgraded from layout versions earlier than
+    // ADD_DATANODE_AND_STORAGE_UUIDS failed to correctly generate a
+    // new storage ID. We check for that and fix it now.
+    boolean haveValidStorageId =
+        DataNodeLayoutVersion.supports(
+            LayoutVersion.Feature.ADD_DATANODE_AND_STORAGE_UUIDS, layoutVersion)
+        && DatanodeStorage.isValidStorageId(sd.getStorageUuid());
+
+    // regular start up.
     if (this.layoutVersion == HdfsConstants.DATANODE_LAYOUT_VERSION) {
-      createStorageID(sd);
+      createStorageID(sd, !haveValidStorageId);
       return; // regular startup
     }
-
+
     // do upgrade
     if (this.layoutVersion <

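For context, the fix hinges on telling valid UUID-based storage IDs apart from legacy ones and regenerating the latter on startup. A rough, self-contained approximation of what DatanodeStorage.generateUuid() and isValidStorageId() do (the real methods may differ in detail; this is a hedged sketch, not the Hadoop code):

import java.util.UUID;

public class StorageIdSketch {
  private static final String PREFIX = "DS-";  // assumed storage-ID prefix

  static String generateUuid() {
    return PREFIX + UUID.randomUUID();
  }

  static boolean isValidStorageId(String id) {
    if (id == null || !id.startsWith(PREFIX)) {
      return false;
    }
    try {
      // Valid modern IDs carry a parseable UUID after the prefix.
      UUID.fromString(id.substring(PREFIX.length()));
      return true;
    } catch (IllegalArgumentException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    String fresh = generateUuid();
    System.out.println(fresh + " valid? " + isValidStorageId(fresh));
    // A legacy-format ID fails the check and would be regenerated on
    // startup, which is the essence of the fix above.
    System.out.println("DS-126-127.0.0.1-50010 valid? "
        + isValidStorageId("DS-126-127.0.0.1-50010"));
  }
}
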
[18/43] hadoop git commit: HDFS-7763. fix zkfc hung issue due to not catching exception in a corner case. Contributed by Liang Xie.

2015-08-14 Thread sjlee
HDFS-7763. fix zkfc hung issue due to not catching exception in a corner case. 
Contributed by Liang Xie.

(cherry picked from commit 7105ebaa9f370db04962a1e19a67073dc080433b)
(cherry picked from commit efb7e287f45c6502f293456034a37d9209a917be)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fd70e4db
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fd70e4db
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fd70e4db

Branch: refs/heads/sjlee/hdfs-merge
Commit: fd70e4db105e140fc3d60042abb3f598c9afd13f
Parents: d5ddc34
Author: Andrew Wang w...@apache.org
Authored: Tue Feb 24 15:31:13 2015 -0800
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 23:25:12 2015 -0700

--
 .../apache/hadoop/hdfs/tools/DFSZKFailoverController.java   | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fd70e4db/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
index a42b1e3..85f77f1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
@@ -176,8 +176,13 @@ public class DFSZKFailoverController extends ZKFailoverController {
         new HdfsConfiguration(), args);
     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
         parser.getConfiguration());
-
-    System.exit(zkfc.run(parser.getRemainingArgs()));
+    int retCode = 0;
+    try {
+      retCode = zkfc.run(parser.getRemainingArgs());
+    } catch (Throwable t) {
+      LOG.fatal("Got a fatal error, exiting now", t);
+    }
+    System.exit(retCode);
   }
 
   @Override


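The pattern being applied is a catch-all guard around the service entry point so the process always reaches System.exit(). A minimal sketch (illustrative names, not the ZKFC code):

public class MainLoopGuard {
  public static void main(String[] args) {
    int retCode = 0;
    try {
      retCode = run(args);  // stand-in for zkfc.run(parser.getRemainingArgs())
    } catch (Throwable t) {
      // Catching Throwable (not just Exception) guarantees we still reach
      // System.exit(); otherwise an Error thrown here could leave non-daemon
      // threads alive and the process hung, which is the bug being fixed.
      System.err.println("Got a fatal error, exiting now: " + t);
    }
    System.exit(retCode);
  }

  private static int run(String[] args) {
    return args.length;  // trivial stand-in for the real service loop
  }
}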

[23/43] hadoop git commit: HDFS-7587. Edit log corruption can happen if append fails with a quota violation. Contributed by Jing Zhao.

2015-08-14 Thread sjlee
HDFS-7587. Edit log corruption can happen if append fails with a quota 
violation. Contributed by Jing Zhao.

Committed Ming Ma's 2.6 patch.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7f0bb5d3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7f0bb5d3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7f0bb5d3

Branch: refs/heads/sjlee/hdfs-merge
Commit: 7f0bb5d3fe0db2e6b9354c8d8a1b603f2390184f
Parents: c723f3b
Author: Jing Zhao ji...@apache.org
Authored: Wed Mar 18 18:51:14 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 09:02:46 2015 -0700

--
 .../hdfs/server/namenode/FSDirectory.java   |  8 +-
 .../hdfs/server/namenode/FSEditLogLoader.java   |  2 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 86 +++-
 .../hdfs/server/namenode/INodesInPath.java  |  4 +
 .../namenode/TestDiskspaceQuotaUpdate.java  | 42 ++
 5 files changed, 119 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f0bb5d3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 9ca50c4..95877ab 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -267,6 +267,10 @@ public class FSDirectory implements Closeable {
 }
   }
 
+  boolean shouldSkipQuotaChecks() {
+    return skipQuotaCheck;
+  }
+
   /** Enable quota verification */
   void enableQuotaChecks() {
 skipQuotaCheck = false;
@@ -1738,7 +1742,7 @@ public class FSDirectory implements Closeable {
* update quota of each inode and check to see if quota is exceeded. 
* See {@link #updateCount(INodesInPath, long, long, boolean)}
*/ 
-  private void updateCountNoQuotaCheck(INodesInPath inodesInPath,
+  void updateCountNoQuotaCheck(INodesInPath inodesInPath,
   int numOfINodes, long nsDelta, long dsDelta) {
 assert hasWriteLock();
 try {
@@ -1877,7 +1881,7 @@ public class FSDirectory implements Closeable {
*  Pass null if a node is not being moved.
* @throws QuotaExceededException if quota limit is exceeded.
*/
-  private static void verifyQuota(INode[] inodes, int pos, long nsDelta,
+  static void verifyQuota(INode[] inodes, int pos, long nsDelta,
   long dsDelta, INode commonAncestor) throws QuotaExceededException {
     if (nsDelta <= 0 && dsDelta <= 0) {
   // if quota is being freed or not being consumed

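The verifyQuota() visibility change above exposes the usual quota-delta check. A self-contained sketch of that check's shape (simplified types, not the FSDirectory code):

public class QuotaCheckSketch {
  static class Dir {
    long nsQuota = -1, nsUsed;   // -1 means unlimited
    long dsQuota = -1, dsUsed;
    Dir parent;
  }

  // Mirrors the early return in verifyQuota(): freeing space never needs a
  // check; consuming space is validated against every ancestor's limit.
  static void verifyQuota(Dir dir, long nsDelta, long dsDelta) {
    if (nsDelta <= 0 && dsDelta <= 0) {
      return;  // quota is being freed or not being consumed
    }
    for (Dir d = dir; d != null; d = d.parent) {
      if (d.nsQuota >= 0 && d.nsUsed + nsDelta > d.nsQuota) {
        throw new IllegalStateException("namespace quota exceeded");
      }
      if (d.dsQuota >= 0 && d.dsUsed + dsDelta > d.dsQuota) {
        throw new IllegalStateException("diskspace quota exceeded");
      }
    }
  }

  public static void main(String[] args) {
    Dir root = new Dir();
    root.dsQuota = 100;
    root.dsUsed = 90;
    verifyQuota(root, 0, -50);    // fine: freeing space
    try {
      verifyQuota(root, 0, 20);   // 90 + 20 > 100
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage());
    }
  }
}
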
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f0bb5d3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
index 7dfe688..cb5afbb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
@@ -387,7 +387,7 @@ public class FSEditLogLoader {
               " for append");
       }
       LocatedBlock lb = fsNamesys.prepareFileForWrite(path,
-          oldFile, addCloseOp.clientName, addCloseOp.clientMachine, false,
-          iip.getLatestSnapshotId(), false);
+          iip, addCloseOp.clientName, addCloseOp.clientMachine, false,
+          false);
       newFile = INodeFile.valueOf(fsDir.getINode(path),
           path, true);
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f0bb5d3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 5541637..c92b431 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -2872,8 +2872,8 @@ public class 

[15/43] hadoop git commit: HDFS-7714. Simultaneous restart of HA NameNodes and DataNode can cause DataNode to register successfully with only one NameNode.(Contributed by Vinayakumar B)

2015-08-14 Thread sjlee
HDFS-7714. Simultaneous restart of HA NameNodes and DataNode can cause DataNode 
to register successfully with only one NameNode.(Contributed by Vinayakumar B)

(cherry picked from commit 3d15728ff5301296801e541d9b23bd1687c4adad)
(cherry picked from commit a1bf7aecf7d018c5305fa3bd7a9e3ef9af3155c1)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c1e65de5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c1e65de5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c1e65de5

Branch: refs/heads/sjlee/hdfs-merge
Commit: c1e65de57e8ef760586e28cd37397ea9a7ac7944
Parents: 21d8b22
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue Feb 10 10:43:08 2015 +0530
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 22:58:34 2015 -0700

--
 .../org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java  | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1e65de5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
index 6bdb68a..62ba1ab 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.hdfs.server.datanode;
 
 import static org.apache.hadoop.util.Time.now;
 
+import java.io.EOFException;
 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.net.SocketTimeoutException;
@@ -802,6 +803,10 @@ class BPServiceActor implements Runnable {
 // Use returned registration from namenode with updated fields
 bpRegistration = bpNamenode.registerDatanode(bpRegistration);
 break;
+      } catch(EOFException e) {  // namenode might have just restarted
+        LOG.info("Problem connecting to server: " + nnAddr + " :"
+            + e.getLocalizedMessage());
+        sleepAndLogInterrupts(1000, "connecting to server");
       } catch(SocketTimeoutException e) {  // namenode is busy
         LOG.info("Problem connecting to server: " + nnAddr);
         sleepAndLogInterrupts(1000, "connecting to server");


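The new catch clause treats an EOFException during registration like a busy NameNode: log, sleep, retry. A minimal sketch of that loop shape (illustrative interface, not the Hadoop API):

import java.io.EOFException;
import java.io.IOException;
import java.net.SocketTimeoutException;

public class RegisterRetrySketch {
  interface NameNodeClient {
    void register() throws IOException;
  }

  // EOFException (NN restarting, connection cut mid-response) is retried
  // exactly like SocketTimeoutException (NN busy); any other IOException
  // still propagates to the caller.
  static void registerWithRetry(NameNodeClient nn)
      throws IOException, InterruptedException {
    while (true) {
      try {
        nn.register();
        return;
      } catch (EOFException e) {           // namenode might have just restarted
        Thread.sleep(1000);
      } catch (SocketTimeoutException e) { // namenode is busy
        Thread.sleep(1000);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    registerWithRetry(new NameNodeClient() {
      int attempts = 0;
      public void register() throws IOException {
        if (attempts++ < 2) {
          throw new EOFException("NN restarting");  // simulated restart
        }
      }
    });
    System.out.println("registered");
  }
}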

[02/43] hadoop git commit: HDFS-7235. DataNode#transferBlock should report blocks that don't exist using reportBadBlock (yzhang via cmccabe) (cherry picked from commit ac9ab037e9a9b03e4fa9bd471d3ab994

2015-08-14 Thread sjlee
HDFS-7235. DataNode#transferBlock should report blocks that don't exist using 
reportBadBlock (yzhang via cmccabe)
(cherry picked from commit ac9ab037e9a9b03e4fa9bd471d3ab9940beb53fb)

(cherry picked from commit 842a54a5f66e76eb79321b66cc3b8820fe66c5cd)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1aa9e34c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1aa9e34c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1aa9e34c

Branch: refs/heads/sjlee/hdfs-merge
Commit: 1aa9e34c5106c496ffd390f6b2c822d387fb1908
Parents: f94aa4d
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Oct 28 16:41:22 2014 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 21:21:17 2015 -0700

--
 .../hadoop/hdfs/server/datanode/DataNode.java   | 59 +++-
 .../UnexpectedReplicaStateException.java| 45 +++
 .../server/datanode/fsdataset/FsDatasetSpi.java | 28 ++
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 54 --
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 46 ---
 .../org/apache/hadoop/hdfs/TestReplication.java | 32 ---
 .../server/datanode/SimulatedFSDataset.java | 43 --
 7 files changed, 267 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1aa9e34c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index fe57bc3..badb845 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -56,7 +56,9 @@ import java.io.BufferedOutputStream;
 import java.io.ByteArrayInputStream;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
+import java.io.EOFException;
 import java.io.FileInputStream;
+import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
@@ -1773,30 +1775,59 @@ public class DataNode extends ReconfigurableBase
   int getXmitsInProgress() {
     return xmitsInProgress.get();
   }
-
+
+  private void reportBadBlock(final BPOfferService bpos,
+      final ExtendedBlock block, final String msg) {
+    FsVolumeSpi volume = getFSDataset().getVolume(block);
+    bpos.reportBadBlocks(
+        block, volume.getStorageID(), volume.getStorageType());
+    LOG.warn(msg);
+  }
+
   private void transferBlock(ExtendedBlock block, DatanodeInfo[] xferTargets,
       StorageType[] xferTargetStorageTypes) throws IOException {
     BPOfferService bpos = getBPOSForBlock(block);
     DatanodeRegistration bpReg = getDNRegistrationForBP(block.getBlockPoolId());
-
-    if (!data.isValidBlock(block)) {
-      // block does not exist or is under-construction
+
+    boolean replicaNotExist = false;
+    boolean replicaStateNotFinalized = false;
+    boolean blockFileNotExist = false;
+    boolean lengthTooShort = false;
+
+    try {
+      data.checkBlock(block, block.getNumBytes(), ReplicaState.FINALIZED);
+    } catch (ReplicaNotFoundException e) {
+      replicaNotExist = true;
+    } catch (UnexpectedReplicaStateException e) {
+      replicaStateNotFinalized = true;
+    } catch (FileNotFoundException e) {
+      blockFileNotExist = true;
+    } catch (EOFException e) {
+      lengthTooShort = true;
+    } catch (IOException e) {
+      // The IOException indicates not being able to access block file,
+      // treat it the same here as blockFileNotExist, to trigger
+      // reporting it as a bad block
+      blockFileNotExist = true;
+    }
+
+    if (replicaNotExist || replicaStateNotFinalized) {
       String errStr = "Can't send invalid block " + block;
       LOG.info(errStr);
-
       bpos.trySendErrorReport(DatanodeProtocol.INVALID_BLOCK, errStr);
       return;
     }
-
-    // Check if NN recorded length matches on-disk length
-    long onDiskLength = data.getLength(block);
-    if (block.getNumBytes() > onDiskLength) {
-      FsVolumeSpi volume = getFSDataset().getVolume(block);
+    if (blockFileNotExist) {
+      // Report back to NN bad block caused by non-existent block file.
+      reportBadBlock(bpos, block, "Can't replicate block " + block
+          + " because the block file doesn't exist, or is not accessible");
+      return;
+    }
+    if (lengthTooShort) {
+      // Check if NN recorded length matches on-disk length
       // Shorter on-disk len

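The rewritten transferBlock() probes the replica once and folds each failure mode into a flag before deciding how to respond. A compact sketch of that classification pattern (probe() is a hypothetical stand-in for data.checkBlock(...)):

import java.io.EOFException;
import java.io.FileNotFoundException;
import java.io.IOException;

public class BlockCheckSketch {
  public static void main(String[] args) {
    boolean blockFileNotExist = false;
    boolean lengthTooShort = false;
    try {
      probe();
    } catch (FileNotFoundException e) {
      blockFileNotExist = true;
    } catch (EOFException e) {
      lengthTooShort = true;
    } catch (IOException e) {
      blockFileNotExist = true;  // unreadable counts as missing, as in the fix
    }
    // The response is chosen after classification, outside the probe.
    if (blockFileNotExist) {
      System.out.println("report bad block to NameNode");
    } else if (lengthTooShort) {
      System.out.println("report truncated replica");
    } else {
      System.out.println("replica looks fine; transfer it");
    }
  }

  static void probe() throws IOException {
    throw new FileNotFoundException("block file missing");  // simulated failure
  }
}
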
[11/43] hadoop git commit: HDFS-7596. NameNode should prune dead storages from storageMap. Contributed by Arpit Agarwal.

2015-08-14 Thread sjlee
HDFS-7596. NameNode should prune dead storages from storageMap. Contributed by 
Arpit Agarwal.

(cherry picked from commit ef3c3a832c2f0c1e5ccdda2ff8ef84902912955f)
(cherry picked from commit 75e4e55e12b2faa521af7c23fddcba06a9ce661d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cc637d6e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cc637d6e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cc637d6e

Branch: refs/heads/sjlee/hdfs-merge
Commit: cc637d6ece64dfeb89e78c7e9766836149e098be
Parents: 96f0813
Author: cnauroth cnaur...@apache.org
Authored: Sat Jan 10 09:18:33 2015 -0800
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 22:21:37 2015 -0700

--
 .../blockmanagement/DatanodeDescriptor.java |  42 ++-
 .../blockmanagement/TestBlockManager.java   |   6 +-
 .../TestNameNodePrunesMissingStorages.java  | 121 +++
 3 files changed, 165 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc637d6e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index cdaab64..a407fe8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -418,6 +418,46 @@ public class DatanodeDescriptor extends DatanodeInfo {
 if (checkFailedStorages) {
   updateFailedStorage(failedStorageInfos);
 }
+
+if (storageMap.size() != reports.length) {
+  pruneStorageMap(reports);
+}
+  }
+
+  /**
+   * Remove stale storages from storageMap. We must not remove any storages
+   * as long as they have associated block replicas.
+   */
+  private void pruneStorageMap(final StorageReport[] reports) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Number of storages reported in heartbeat=" + reports.length +
+          "; Number of storages in storageMap=" + storageMap.size());
+    }
+
+    HashMap<String, DatanodeStorageInfo> excessStorages;
+
+    synchronized (storageMap) {
+      // Init excessStorages with all known storages.
+      excessStorages = new HashMap<String, DatanodeStorageInfo>(storageMap);
+
+      // Remove storages that the DN reported in the heartbeat.
+      for (final StorageReport report : reports) {
+        excessStorages.remove(report.getStorage().getStorageID());
+      }
+
+      // For each remaining storage, remove it if there are no associated
+      // blocks.
+      for (final DatanodeStorageInfo storageInfo : excessStorages.values()) {
+        if (storageInfo.numBlocks() == 0) {
+          storageMap.remove(storageInfo.getStorageID());
+          LOG.info("Removed storage " + storageInfo + " from DataNode" + this);
+        } else if (LOG.isDebugEnabled()) {
+          // This can occur until all block reports are received.
+          LOG.debug("Deferring removal of stale storage " + storageInfo +
+              " with " + storageInfo.numBlocks() + " blocks");
+        }
+      }
+    }
   }
 
   private void updateFailedStorage(
@@ -749,8 +789,6 @@ public class DatanodeDescriptor extends DatanodeInfo {
 // For backwards compatibility, make sure that the type and
 // state are updated. Some reports from older datanodes do
 // not include these fields so we may have assumed defaults.
-// This check can be removed in the next major release after
-// 2.4.
 storage.updateFromStorage(s);
 storageMap.put(storage.getStorageID(), storage);
   }

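The pruning logic above boils down to: copy the known set, subtract whatever the heartbeat reported, and drop the remainder only if it holds no blocks. A toy version with plain maps (illustrative data, not the Hadoop types):

import java.util.HashMap;
import java.util.Map;

public class PruneSketch {
  public static void main(String[] args) {
    // storage id -> block count, standing in for storageMap.
    Map<String, Integer> storages = new HashMap<String, Integer>();
    storages.put("DS-a", 0);   // stale and empty -> prune
    storages.put("DS-b", 12);  // stale but non-empty -> defer
    storages.put("DS-c", 3);   // still reported -> keep

    String[] reported = {"DS-c"};
    Map<String, Integer> excess = new HashMap<String, Integer>(storages);
    for (String id : reported) {
      excess.remove(id);  // anything left was not in the latest heartbeat
    }
    for (Map.Entry<String, Integer> e : excess.entrySet()) {
      if (e.getValue() == 0) {
        storages.remove(e.getKey());  // safe: no replicas reference it
      }
    }
    System.out.println(storages);  // DS-a pruned; DS-b and DS-c remain
  }
}
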
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc637d6e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
index b444ccc..5beb811 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
@@ -573,11 +573,13 @@ public class TestBlockManager {
 

[04/43] hadoop git commit: HDFS-7035. Make adding a new data directory to the DataNode an atomic operation and improve error handling (Lei Xu via Colin P. McCabe) (cherry picked from commit a9331fe9b0

2015-08-14 Thread sjlee
HDFS-7035. Make adding a new data directory to the DataNode an atomic operation 
and improve error handling (Lei Xu via Colin P. McCabe)
(cherry picked from commit a9331fe9b071fdcdae0c6c747d7b6b306142e671)

(cherry picked from commit ec2621e907742aad0264c5f533783f0f18565880)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d79a5849
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d79a5849
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d79a5849

Branch: refs/heads/sjlee/hdfs-merge
Commit: d79a584cdb0bc315938b80ed71b4f2dcb720
Parents: 3827a1a
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Thu Oct 30 17:31:23 2014 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 21:31:34 2015 -0700

--
 .../hadoop/hdfs/server/common/Storage.java  |  15 +
 .../hadoop/hdfs/server/common/StorageInfo.java  |   4 +
 .../server/datanode/BlockPoolSliceStorage.java  | 168 +---
 .../hadoop/hdfs/server/datanode/DataNode.java   | 109 --
 .../hdfs/server/datanode/DataStorage.java   | 382 ++-
 .../server/datanode/fsdataset/FsDatasetSpi.java |   6 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 161 +++-
 .../server/datanode/SimulatedFSDataset.java |   7 +-
 .../datanode/TestDataNodeHotSwapVolumes.java| 108 +-
 .../hdfs/server/datanode/TestDataStorage.java   |  26 +-
 .../fsdataset/impl/TestFsDatasetImpl.java   |  27 +-
 11 files changed, 575 insertions(+), 438 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d79a5849/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
index 735e0c1..14b52ce 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
@@ -821,6 +821,21 @@ public abstract class Storage extends StorageInfo {
   }
 
   /**
+   * Returns true if the storage directory on the given directory is already
+   * loaded.
+   * @param root the root directory of a {@link StorageDirectory}
+   * @throws IOException if failed to get canonical path.
+   */
+  protected boolean containsStorageDir(File root) throws IOException {
+    for (StorageDirectory sd : storageDirs) {
+      if (sd.getRoot().getCanonicalPath().equals(root.getCanonicalPath())) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  /**
* Return true if the layout of the given storage directory is from a version
* of Hadoop prior to the introduction of the current and previous
* directories which allow upgrade and rollback.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d79a5849/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
index 50c8044..a3f82ff 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
@@ -192,6 +192,10 @@ public class StorageInfo {
 namespaceID = nsId;
   }
 
+  public void setServiceLayoutVersion(int lv) {
+    this.layoutVersion = lv;
+  }
+
   public int getServiceLayoutVersion() {
     return storageType == NodeType.DATA_NODE ? HdfsConstants.DATANODE_LAYOUT_VERSION
         : HdfsConstants.NAMENODE_LAYOUT_VERSION;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d79a5849/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
index 8333bb4..8c819a7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
+++ 

[01/43] hadoop git commit: HDFS-7213. processIncrementalBlockReport performance degradation. Contributed by Eric Payne. (cherry picked from commit e226b5b40d716b6d363c43a8783766b72734e347)

2015-08-14 Thread sjlee
Repository: hadoop
Updated Branches:
  refs/heads/sjlee/hdfs-merge [created] fb1bf424b


HDFS-7213. processIncrementalBlockReport performance degradation.
Contributed by Eric Payne.
(cherry picked from commit e226b5b40d716b6d363c43a8783766b72734e347)

(cherry picked from commit 946463efefec9031cacb21d5a5367acd150ef904)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f94aa4d2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f94aa4d2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f94aa4d2

Branch: refs/heads/sjlee/hdfs-merge
Commit: f94aa4d25c2f96faf5164e807c2c3eb031e9a1fe
Parents: 4239513
Author: Kihwal Lee kih...@apache.org
Authored: Tue Oct 28 14:55:16 2014 -0500
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 21:21:09 2015 -0700

--
 .../apache/hadoop/hdfs/server/blockmanagement/BlockManager.java  | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f94aa4d2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 37df223..17112bf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -3093,9 +3093,11 @@ public class BlockManager {
           + " is received from " + nodeID);
       }
     }
-    blockLog.debug("*BLOCK* NameNode.processIncrementalBlockReport: " + "from "
+    if (blockLog.isDebugEnabled()) {
+      blockLog.debug("*BLOCK* NameNode.processIncrementalBlockReport: " + "from "
         + nodeID + " receiving: " + receiving + ", " + " received: " + received
         + ", " + " deleted: " + deleted);
+    }
   }
 
   /**


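The fix is the classic guarded-logging idiom: skip the string concatenation entirely unless debug logging is enabled, since the message is built eagerly even when debug() later discards it. A minimal sketch using the same commons-logging API:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class GuardedLogging {
  private static final Log LOG = LogFactory.getLog(GuardedLogging.class);

  static void report(Object nodeID, int receiving, int received, int deleted) {
    if (LOG.isDebugEnabled()) {
      // Built only when debug is on; otherwise no garbage and no CPU spent
      // on a hot path such as incremental block report processing.
      LOG.debug("processIncrementalBlockReport: from " + nodeID
          + " receiving: " + receiving + " received: " + received
          + " deleted: " + deleted);
    }
  }

  public static void main(String[] args) {
    report("node-1", 3, 2, 1);
  }
}
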

[24/43] hadoop git commit: HDFS-7929. inotify unable fetch pre-upgrade edit log segments once upgrade starts (Zhe Zhang via Colin P. McCabe)

2015-08-14 Thread sjlee
HDFS-7929. inotify unable fetch pre-upgrade edit log segments once upgrade 
starts (Zhe Zhang via Colin P. McCabe)

(cherry picked from commit 43b41f22411439c5e23629197fb2fde45dcf0f0f)
(cherry picked from commit 219eb22c1571f76df32967a930049d983cbf5024)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNUpgradeUtil.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/03798416
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/03798416
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/03798416

Branch: refs/heads/sjlee/hdfs-merge
Commit: 03798416bfe27383c52e4d9f632fe9fa168c6e95
Parents: 7f0bb5d
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Wed Mar 18 18:48:54 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 09:47:33 2015 -0700

--
 .../hadoop/hdfs/server/namenode/FSImage.java|  2 +-
 .../server/namenode/FileJournalManager.java |  2 +-
 .../hdfs/server/namenode/NNUpgradeUtil.java | 44 --
 .../org/apache/hadoop/hdfs/TestDFSUpgrade.java  | 48 +++-
 4 files changed, 90 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/03798416/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
index 9b72421..51efb51 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
@@ -393,7 +393,7 @@ public class FSImage implements Closeable {
     for (Iterator<StorageDirectory> it = storage.dirIterator(false); it.hasNext();) {
       StorageDirectory sd = it.next();
       try {
-        NNUpgradeUtil.doPreUpgrade(sd);
+        NNUpgradeUtil.doPreUpgrade(conf, sd);
       } catch (Exception e) {
         LOG.error("Failed to move aside pre-upgrade storage " +
             "in image directory " + sd.getRoot(), e);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/03798416/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
index 101c42c..2df052b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
@@ -585,7 +585,7 @@ public class FileJournalManager implements JournalManager {
   public void doPreUpgrade() throws IOException {
     LOG.info("Starting upgrade of edits directory " + sd.getRoot());
     try {
-     NNUpgradeUtil.doPreUpgrade(sd);
+     NNUpgradeUtil.doPreUpgrade(conf, sd);
     } catch (IOException ioe) {
      LOG.error("Failed to move aside pre-upgrade storage " +
          "in image directory " + sd.getRoot(), ioe);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/03798416/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNUpgradeUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNUpgradeUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNUpgradeUtil.java
index 546480d..c63da20 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNUpgradeUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNUpgradeUtil.java
@@ -18,10 +18,13 @@
 package org.apache.hadoop.hdfs.server.namenode;
 
 import java.io.File;
+import java.io.FilenameFilter;
 import java.io.IOException;
+import java.util.List;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.server.common.Storage;
 import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory;
 import org.apache.hadoop.hdfs.server.common.StorageInfo;
@@ -99,15 

[29/43] hadoop git commit: HDFS-7999. FsDatasetImpl#createTemporary sometimes holds the FSDatasetImpl lock for a very long time (sinago via cmccabe)

2015-08-14 Thread sjlee
HDFS-7999. FsDatasetImpl#createTemporary sometimes holds the FSDatasetImpl lock 
for a very long time (sinago via cmccabe)

(cherry picked from commit 28bebc81db8bb6d1bc2574de7564fe4c595cfe09)
(cherry picked from commit a827089905524e10638c783ba908a895d621911d)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c3a3092c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c3a3092c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c3a3092c

Branch: refs/heads/sjlee/hdfs-merge
Commit: c3a3092c37926eca75ea149c4c061742f6599b40
Parents: c6b68a8
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Mon Apr 6 08:54:46 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 11:17:20 2015 -0700

--
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 67 +---
 1 file changed, 44 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c3a3092c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index f24d644..e352ea3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -1180,30 +1180,51 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
   }
 
   @Override // FsDatasetSpi
-  public synchronized ReplicaInPipeline createTemporary(StorageType storageType,
-      ExtendedBlock b) throws IOException {
-    ReplicaInfo replicaInfo = volumeMap.get(b.getBlockPoolId(), b.getBlockId());
-    if (replicaInfo != null) {
-      if (replicaInfo.getGenerationStamp() < b.getGenerationStamp()
-          && replicaInfo instanceof ReplicaInPipeline) {
-        // Stop the previous writer
-        ((ReplicaInPipeline)replicaInfo)
-            .stopWriter(datanode.getDnConf().getXceiverStopTimeout());
-        invalidate(b.getBlockPoolId(), new Block[]{replicaInfo});
-      } else {
-        throw new ReplicaAlreadyExistsException("Block " + b +
-            " already exists in state " + replicaInfo.getState() +
-            " and thus cannot be created.");
+  public ReplicaInPipeline createTemporary(
+      StorageType storageType, ExtendedBlock b) throws IOException {
+    long startTimeMs = Time.monotonicNow();
+    long writerStopTimeoutMs = datanode.getDnConf().getXceiverStopTimeout();
+    ReplicaInfo lastFoundReplicaInfo = null;
+    do {
+      synchronized (this) {
+        ReplicaInfo currentReplicaInfo =
+            volumeMap.get(b.getBlockPoolId(), b.getBlockId());
+        if (currentReplicaInfo == lastFoundReplicaInfo) {
+          if (lastFoundReplicaInfo != null) {
+            invalidate(b.getBlockPoolId(), new Block[] { lastFoundReplicaInfo });
+          }
+          FsVolumeImpl v = volumes.getNextVolume(storageType, b.getNumBytes());
+          // create a temporary file to hold block in the designated volume
+          File f = v.createTmpFile(b.getBlockPoolId(), b.getLocalBlock());
+          ReplicaInPipeline newReplicaInfo =
+              new ReplicaInPipeline(b.getBlockId(), b.getGenerationStamp(), v,
+                  f.getParentFile(), 0);
+          volumeMap.add(b.getBlockPoolId(), newReplicaInfo);
+          return newReplicaInfo;
+        } else {
+          if (!(currentReplicaInfo.getGenerationStamp() < b
+              .getGenerationStamp() && currentReplicaInfo instanceof ReplicaInPipeline)) {
+            throw new ReplicaAlreadyExistsException("Block " + b
+                + " already exists in state " + currentReplicaInfo.getState()
+                + " and thus cannot be created.");
+          }
+          lastFoundReplicaInfo = currentReplicaInfo;
+        }
       }
-    }
-
-    FsVolumeImpl v = volumes.getNextVolume(storageType, b.getNumBytes());
-    // create a temporary file to hold block in the designated volume
-    File f = v.createTmpFile(b.getBlockPoolId(), b.getLocalBlock());
-    ReplicaInPipeline newReplicaInfo = new ReplicaInPipeline(b.getBlockId(),
-        b.getGenerationStamp(), v, f.getParentFile(), 0);
-    volumeMap.add(b.getBlockPoolId(), newReplicaInfo);
-    return newReplicaInfo;
+
+      // Hang too long, just bail out. This is not supposed to

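The restructured createTemporary() releases the dataset lock while stopping the previous writer, then re-takes the lock and re-checks before mutating state. A stripped-down sketch of that release-and-retry shape (illustrative types, not the Hadoop code):

public class LockRetrySketch {
  private final Object lock = new Object();
  private volatile Object current;  // stand-in for the replica map entry

  Object createOrReplace(long timeoutMs) throws InterruptedException {
    long start = System.currentTimeMillis();
    Object lastSeen = null;
    do {
      synchronized (lock) {
        Object now = current;
        if (now == lastSeen) {  // unchanged since we stopped its writer
          current = new Object();
          return current;
        }
        lastSeen = now;
      }
      // The slow operation runs with the lock released, so other threads
      // (e.g. heartbeats) are not blocked behind it.
      stopWriter(lastSeen);
    } while (System.currentTimeMillis() - start < timeoutMs);
    throw new IllegalStateException("gave up waiting for writer to stop");
  }

  private void stopWriter(Object writer) throws InterruptedException {
    Thread.sleep(10);  // simulate joining the writer thread
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(new LockRetrySketch().createOrReplace(1000));
  }
}
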
[36/43] hadoop git commit: HDFS-7278. Add a command that allows sysadmins to manually trigger full block reports from a DN (cmccabe) (cherry picked from commit baf794dc404ac54f4e8332654eadfac1bebacb8f

2015-08-14 Thread sjlee
HDFS-7278. Add a command that allows sysadmins to manually trigger full block 
reports from a DN (cmccabe)
(cherry picked from commit baf794dc404ac54f4e8332654eadfac1bebacb8f)

(cherry picked from commit 5f3d967aaefa0b20ef1586b4048b8fa5345d2618)

Conflicts:
hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HDFSCommands.apt.vm


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a776ef5a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a776ef5a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a776ef5a

Branch: refs/heads/sjlee/hdfs-merge
Commit: a776ef5ad2876b9acf6cf89824c306783f7759f1
Parents: 995382c
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Mon Oct 27 09:53:16 2014 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 18:15:50 2015 -0700

--
 .../hadoop/hdfs/client/BlockReportOptions.java  |  59 
 .../hdfs/protocol/ClientDatanodeProtocol.java   |   7 +
 ...tDatanodeProtocolServerSideTranslatorPB.java |  18 +++
 .../ClientDatanodeProtocolTranslatorPB.java |  16 +++
 .../hdfs/server/datanode/BPServiceActor.java|  17 +++
 .../hadoop/hdfs/server/datanode/DataNode.java   |  14 ++
 .../org/apache/hadoop/hdfs/tools/DFSAdmin.java  |  53 
 .../src/main/proto/ClientDatanodeProtocol.proto |  10 ++
 .../src/site/apt/HDFSCommands.apt.vm|   6 +
 .../server/datanode/TestTriggerBlockReport.java | 134 +++
 10 files changed, 334 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a776ef5a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/BlockReportOptions.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/BlockReportOptions.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/BlockReportOptions.java
new file mode 100644
index 000..07f4836
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/BlockReportOptions.java
@@ -0,0 +1,59 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.client;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Options that can be specified when manually triggering a block report.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public final class BlockReportOptions {
+  private final boolean incremental;
+
+  private BlockReportOptions(boolean incremental) {
+    this.incremental = incremental;
+  }
+
+  public boolean isIncremental() {
+    return incremental;
+  }
+
+  public static class Factory {
+    private boolean incremental = false;
+
+    public Factory() {
+    }
+
+    public Factory setIncremental(boolean incremental) {
+      this.incremental = incremental;
+      return this;
+    }
+
+    public BlockReportOptions build() {
+      return new BlockReportOptions(incremental);
+    }
+  }
+
+  @Override
+  public String toString() {
+    return "BlockReportOptions{incremental=" + incremental + "}";
+  }
+}

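The Factory above is a plain builder; constructing options for a manually triggered report looks like this (assuming the BlockReportOptions class shown above is on the classpath):

import org.apache.hadoop.hdfs.client.BlockReportOptions;

public class TriggerOptionsDemo {
  public static void main(String[] args) {
    // Factory defaults to incremental=false; setIncremental flips it.
    BlockReportOptions opts =
        new BlockReportOptions.Factory().setIncremental(true).build();
    System.out.println(opts);  // prints BlockReportOptions{incremental=true}
  }
}
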
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a776ef5a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
index 9cd5ccd..1dcc196 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
@@ -25,6 +25,7 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import 

[31/43] hadoop git commit: HDFS-8046. Allow better control of getContentSummary. Contributed by Kihwal Lee. (cherry picked from commit 285b31e75e51ec8e3a796c2cb0208739368ca9b8)

2015-08-14 Thread sjlee
HDFS-8046. Allow better control of getContentSummary. Contributed by Kihwal Lee.
(cherry picked from commit 285b31e75e51ec8e3a796c2cb0208739368ca9b8)

(cherry picked from commit 7e622076d41a85fc9a8600fb270564a085f5cd83)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1ef5e0b1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1ef5e0b1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1ef5e0b1

Branch: refs/heads/sjlee/hdfs-merge
Commit: 1ef5e0b18066ca949adcf4c55a41f186c47e7264
Parents: de21de7
Author: Kihwal Lee kih...@apache.org
Authored: Wed Apr 8 15:39:25 2015 -0500
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 15:30:45 2015 -0700

--
 .../main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java   |  4 +++-
 .../server/namenode/ContentSummaryComputationContext.java | 10 +++---
 .../apache/hadoop/hdfs/server/namenode/FSDirectory.java   | 10 +-
 3 files changed, 19 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1ef5e0b1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index fd313bb..85b740e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -272,7 +272,9 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String  DFS_LIST_LIMIT = "dfs.ls.limit";
   public static final int     DFS_LIST_LIMIT_DEFAULT = 1000;
   public static final String  DFS_CONTENT_SUMMARY_LIMIT_KEY = "dfs.content-summary.limit";
-  public static final int     DFS_CONTENT_SUMMARY_LIMIT_DEFAULT = 0;
+  public static final int     DFS_CONTENT_SUMMARY_LIMIT_DEFAULT = 5000;
+  public static final String  DFS_CONTENT_SUMMARY_SLEEP_MICROSEC_KEY = "dfs.content-summary.sleep-microsec";
+  public static final long    DFS_CONTENT_SUMMARY_SLEEP_MICROSEC_DEFAULT = 500;
   public static final String  DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY = "dfs.datanode.failed.volumes.tolerated";
   public static final int     DFS_DATANODE_FAILED_VOLUMES_TOLERATED_DEFAULT = 0;
   public static final String  DFS_DATANODE_SYNCONCLOSE_KEY = "dfs.datanode.synconclose";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1ef5e0b1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
index dab64ec..17e16ab 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
@@ -29,6 +29,8 @@ public class ContentSummaryComputationContext {
   private long nextCountLimit = 0;
   private long limitPerRun = 0;
   private long yieldCount = 0;
+  private long sleepMilliSec = 0;
+  private int sleepNanoSec = 0;
 
   /**
* Constructor
@@ -40,17 +42,19 @@ public class ContentSummaryComputationContext {
*no limit (i.e. no yielding)
*/
   public ContentSummaryComputationContext(FSDirectory dir,
-  FSNamesystem fsn, long limitPerRun) {
+  FSNamesystem fsn, long limitPerRun, long sleepMicroSec) {
 this.dir = dir;
 this.fsn = fsn;
 this.limitPerRun = limitPerRun;
 this.nextCountLimit = limitPerRun;
 this.counts = Content.Counts.newInstance();
+this.sleepMilliSec = sleepMicroSec/1000;
+this.sleepNanoSec = (int)((sleepMicroSec%1000)*1000);
   }
 
   /** Constructor for blocking computation. */
   public ContentSummaryComputationContext() {
-this(null, null, 0);
+this(null, null, 0, 1000);
   }
 
   /** Return current yield count */
@@ -101,7 +105,7 @@ public class ContentSummaryComputationContext {
 
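The constructor above splits a configured microsecond value into the (milliseconds, nanoseconds) pair that Thread.sleep(long, int) accepts. A tiny worked example of the same arithmetic:

public class MicroSleep {
  public static void main(String[] args) throws InterruptedException {
    long sleepMicroSec = 2500;                          // e.g. from configuration
    long millis = sleepMicroSec / 1000;                 // 2 ms
    int nanos = (int) ((sleepMicroSec % 1000) * 1000);  // 500 us -> 500000 ns
    Thread.sleep(millis, nanos);                        // yield to other work
    System.out.println(millis + " ms + " + nanos + " ns");
  }
}
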

[38/43] hadoop git commit: HDFS-8404. Pending block replication can get stuck using older genstamp. Contributed by Nathan Roberts. (cherry picked from commit 8860e352c394372e4eb3ebdf82ea899567f34e4e)

2015-08-14 Thread sjlee
HDFS-8404. Pending block replication can get stuck using older genstamp. 
Contributed by Nathan Roberts.
(cherry picked from commit 8860e352c394372e4eb3ebdf82ea899567f34e4e)

(cherry picked from commit 536b9ee6d6e5b8430fda23cbdcfd859c299fa8ad)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2d5e60fa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2d5e60fa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2d5e60fa

Branch: refs/heads/sjlee/hdfs-merge
Commit: 2d5e60fa12a62463cd54f1b6b0fcb2ccdbd82c42
Parents: 470019e
Author: Kihwal Lee kih...@apache.org
Authored: Tue May 19 13:06:48 2015 -0500
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 18:37:38 2015 -0700

--
 .../server/blockmanagement/BlockManager.java| 17 ++--
 .../blockmanagement/TestPendingReplication.java | 98 +++-
 2 files changed, 105 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d5e60fa/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index bb54402..bcf50b5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1695,13 +1695,18 @@ public class BlockManager {
       namesystem.writeLock();
       try {
         for (int i = 0; i < timedOutItems.length; i++) {
+          /*
+           * Use the blockinfo from the blocksmap to be certain we're working
+           * with the most up-to-date block information (e.g. genstamp).
+           */
+          BlockInfo bi = blocksMap.getStoredBlock(timedOutItems[i]);
+          if (bi == null) {
+            continue;
+          }
           NumberReplicas num = countNodes(timedOutItems[i]);
-          if (isNeededReplication(timedOutItems[i], getReplication(timedOutItems[i]),
-              num.liveReplicas())) {
-            neededReplications.add(timedOutItems[i],
-                num.liveReplicas(),
-                num.decommissionedReplicas(),
-                getReplication(timedOutItems[i]));
+          if (isNeededReplication(bi, getReplication(bi), num.liveReplicas())) {
+            neededReplications.add(bi, num.liveReplicas(),
+                num.decommissionedReplicas(), getReplication(bi));
           }
         }
       } finally {

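The fix re-resolves each timed-out block against the authoritative blocksMap and requeues the stored record, skipping blocks that no longer exist. A toy illustration with a plain map (hypothetical data, not the Hadoop types):

import java.util.HashMap;
import java.util.Map;

public class RequeueSketch {
  public static void main(String[] args) {
    // Authoritative map, analogous to blocksMap (block id -> stored info).
    Map<Long, String> blocksMap = new HashMap<Long, String>();
    blocksMap.put(42L, "blk_42 genstamp=1007");  // current genstamp

    // Timed-out items may carry stale genstamps, and some blocks may have
    // been deleted since they were queued.
    long[] timedOutBlockIds = {42L, 99L};
    for (long id : timedOutBlockIds) {
      String stored = blocksMap.get(id);
      if (stored == null) {
        continue;  // block no longer exists; don't requeue it
      }
      // Requeue the stored (up-to-date) record, never the stale copy.
      System.out.println("requeue " + stored);
    }
  }
}
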
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d5e60fa/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
index c63badc..085d5de 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
@@ -42,6 +42,7 @@ import 
org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo;
 import 
org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo.BlockStatus;
 import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks;
 import org.junit.Test;
+import org.mockito.Mockito;
 
 /**
  * This class tests the internals of PendingReplicationBlocks.java,
@@ -52,13 +53,11 @@ public class TestPendingReplication {
   private static final int DFS_REPLICATION_INTERVAL = 1;
   // Number of datanodes in the cluster
   private static final int DATANODE_COUNT = 5;
-
   @Test
   public void testPendingReplication() {
 PendingReplicationBlocks pendingReplications;
 pendingReplications = new PendingReplicationBlocks(TIMEOUT * 1000);
 pendingReplications.start();
-
 //
 // Add 10 blocks to pendingReplications.
 //
@@ -140,8 +139,7 @@ public class TestPendingReplication {

[41/43] hadoop git commit: HDFS-8270. create() always retried with hardcoded timeout when file already exists with open lease (Contributed by J.Andreina)

2015-08-14 Thread sjlee
HDFS-8270. create() always retried with hardcoded timeout when file already 
exists with open lease (Contributed by J.Andreina)

(cherry picked from commit 54f83d9bd917e8641e902c5f0695e65ded472f9a)
(cherry picked from commit 066e45bcb667bb0c37ef70fd297b24e4f26383eb)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/db40aecd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/db40aecd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/db40aecd

Branch: refs/heads/sjlee/hdfs-merge
Commit: db40aecd8b0acf0ff054541dabf5113b542041e5
Parents: fad2a06
Author: Vinayakumar B vinayakum...@apache.org
Authored: Wed Jun 3 12:11:46 2015 +0530
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 23:52:01 2015 -0700

--
 .../org/apache/hadoop/hdfs/NameNodeProxies.java | 16 
 .../org/apache/hadoop/hdfs/TestFileCreation.java|  3 +--
 2 files changed, 1 insertion(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/db40aecd/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
index b261220..8da00b8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
@@ -42,7 +42,6 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSClient.Conf;
-import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB;
@@ -68,7 +67,6 @@ import org.apache.hadoop.io.retry.RetryProxy;
 import org.apache.hadoop.io.retry.RetryUtils;
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
-import org.apache.hadoop.ipc.RemoteException;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.RefreshUserMappingsProtocol;
 import org.apache.hadoop.security.SecurityUtil;
@@ -425,22 +423,8 @@ public class NameNodeProxies {
 
 if (withRetries) { // create the proxy with retries
 
-  RetryPolicy createPolicy = RetryPolicies
-  .retryUpToMaximumCountWithFixedSleep(5,
-  HdfsConstants.LEASE_SOFTLIMIT_PERIOD, TimeUnit.MILLISECONDS);
-
-  Map<Class<? extends Exception>, RetryPolicy> remoteExceptionToPolicyMap 
- = new HashMap<Class<? extends Exception>, RetryPolicy>();
-  remoteExceptionToPolicyMap.put(AlreadyBeingCreatedException.class,
-  createPolicy);
-
-  RetryPolicy methodPolicy = RetryPolicies.retryByRemoteException(
-  defaultPolicy, remoteExceptionToPolicyMap);
   Map<String, RetryPolicy> methodNameToPolicyMap 
  = new HashMap<String, RetryPolicy>();
-
-  methodNameToPolicyMap.put("create", methodPolicy);
-
   ClientProtocol translatorProxy =
 new ClientNamenodeProtocolTranslatorPB(proxy);
   return (ClientProtocol) RetryProxy.create(

http://git-wip-us.apache.org/repos/asf/hadoop/blob/db40aecd/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
index 3a399f3..8e88b62 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
@@ -408,9 +408,8 @@ public class TestFileCreation {
 GenericTestUtils.assertExceptionContains("already being created by",
 abce);
   }
-  // NameNodeProxies' createNNProxyWithClientProtocol has 5 retries.
   assertCounter("AlreadyBeingCreatedExceptionNumOps",
-  6L, getMetrics(metricsName));
+  1L, getMetrics(metricsName));
   FSDataOutputStream stm2 = fs2.create(p, true);
   stm2.write(2);
   stm2.close();
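
For context, the per-method retry wiring removed by this change amounts to the
following (a sketch assembled from the deleted hunk above, using only names
that appear in it):

  // Hardcoded policy: retry create() up to 5 times with a fixed sleep of
  // LEASE_SOFTLIMIT_PERIOD whenever the server throws
  // AlreadyBeingCreatedException, regardless of the client's configuration.
  RetryPolicy createPolicy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
      5, HdfsConstants.LEASE_SOFTLIMIT_PERIOD, TimeUnit.MILLISECONDS);

  Map<Class<? extends Exception>, RetryPolicy> remoteExceptionToPolicyMap =
      new HashMap<Class<? extends Exception>, RetryPolicy>();
  remoteExceptionToPolicyMap.put(AlreadyBeingCreatedException.class, createPolicy);

  // Route only the listed remote exceptions to the special policy; everything
  // else falls through to defaultPolicy.
  RetryPolicy methodPolicy = RetryPolicies.retryByRemoteException(
      defaultPolicy, remoteExceptionToPolicyMap);
  methodNameToPolicyMap.put("create", methodPolicy);

With that block gone, a create() against a file that already has an open lease
fails on the first attempt, which is why the metrics assertion in the test
drops from 6 operations (1 initial call + 5 retries) to 1.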



[42/43] hadoop git commit: HDFS-8480. Fix performance and timeout issues in HDFS-7929 by using hard-links to preserve old edit logs, instead of copying them. (Zhe Zhang via Colin P. McCabe)

2015-08-14 Thread sjlee
HDFS-8480. Fix performance and timeout issues in HDFS-7929 by using hard-links 
to preserve old edit logs, instead of copying them. (Zhe Zhang via Colin P. 
McCabe)

(cherry picked from commit 7b424f938c3c306795d574792b086d84e4f06425)
(cherry picked from commit cbd11681ce8a51d187d91748b67a708681e599de)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e1b4e69b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e1b4e69b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e1b4e69b

Branch: refs/heads/sjlee/hdfs-merge
Commit: e1b4e69bf23022af3125e1c6dc4ac05c89e1418f
Parents: db40aec
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Mon Jun 22 14:37:10 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 23:59:46 2015 -0700

--
 .../hdfs/server/namenode/NNUpgradeUtil.java   | 18 ++
 1 file changed, 2 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1b4e69b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNUpgradeUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNUpgradeUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNUpgradeUtil.java
index c01b11d..a4d9580 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNUpgradeUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNUpgradeUtil.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hdfs.server.namenode;
 import java.io.File;
 import java.io.FilenameFilter;
 import java.io.IOException;
+import java.nio.file.Files;
 import java.util.List;
 
 import org.apache.commons.logging.Log;
@@ -130,23 +131,8 @@ public abstract class NNUpgradeUtil {
 
 for (String s : fileNameList) {
   File prevFile = new File(tmpDir, s);
-  Preconditions.checkState(prevFile.canRead(),
-  "Edits log file " + s + " is not readable.");
   File newFile = new File(curDir, prevFile.getName());
-  Preconditions.checkState(newFile.createNewFile(),
-  "Cannot create new edits log file in " + curDir);
-  EditLogFileInputStream in = new EditLogFileInputStream(prevFile);
-  EditLogFileOutputStream out =
-  new EditLogFileOutputStream(conf, newFile, 512*1024);
-  FSEditLogOp logOp = in.nextValidOp();
-  while (logOp != null) {
-out.write(logOp);
-logOp = in.nextOp();
-  }
-  out.setReadyToFlush();
-  out.flushAndSync(true);
-  out.close();
-  in.close();
+  Files.createLink(newFile.toPath(), prevFile.toPath());
 }
   }
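
The one-line replacement relies on java.nio.file.Files.createLink, which makes
a hard link instead of streaming every edit op through an input/output stream
pair. A minimal standalone illustration (hypothetical paths, not part of the
patch itself):

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.nio.file.Paths;

  public class HardLinkDemo {
    public static void main(String[] args) throws IOException {
      Path existing = Paths.get("previous.tmp/edits_0000001-0000042");
      Path link = Paths.get("current/edits_0000001-0000042");
      // Both names now refer to the same underlying file, so no bytes are
      // copied and the call is O(1) regardless of the edit log's size.
      // Throws FileAlreadyExistsException if 'link' already exists.
      Files.createLink(link, existing);
    }
  }

Because no data is rewritten, the upgrade no longer scales with the size of
the retained edit logs, which is what caused the timeouts noted in HDFS-7929.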
 



[27/43] hadoop git commit: HDFS-7960. The full block report should prune zombie storages even if they're not empty. Contributed by Colin McCabe and Eddy Xu.

2015-08-14 Thread sjlee
HDFS-7960. The full block report should prune zombie storages even if they're 
not empty. Contributed by Colin McCabe and Eddy Xu.

(cherry picked from commit 50ee8f4e67a66aa77c5359182f61f3e951844db6)
(cherry picked from commit 2f46ee50bd4efc82ba3d30bd36f7637ea9d9714e)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestBlockListAsLongs.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/03d4af39
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/03d4af39
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/03d4af39

Branch: refs/heads/sjlee/hdfs-merge
Commit: 03d4af39e794dc03d764122077b434d658b6405e
Parents: 4c64877
Author: Andrew Wang w...@apache.org
Authored: Mon Mar 23 22:00:34 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 10:54:26 2015 -0700

--
 .../DatanodeProtocolClientSideTranslatorPB.java |   5 +-
 .../DatanodeProtocolServerSideTranslatorPB.java |   4 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  15 +++
 .../server/blockmanagement/BlockManager.java|  55 +++-
 .../blockmanagement/DatanodeDescriptor.java |  51 ++-
 .../blockmanagement/DatanodeStorageInfo.java|  13 +-
 .../hdfs/server/datanode/BPServiceActor.java|  34 +++--
 .../hdfs/server/namenode/NameNodeRpcServer.java |  12 +-
 .../server/protocol/BlockReportContext.java |  52 +++
 .../hdfs/server/protocol/DatanodeProtocol.java  |  10 +-
 .../src/main/proto/DatanodeProtocol.proto   |  14 ++
 .../blockmanagement/TestBlockManager.java   |   8 +-
 .../TestNameNodePrunesMissingStorages.java  | 135 ++-
 .../server/datanode/BlockReportTestBase.java|   4 +-
 .../server/datanode/TestBPOfferService.java |  10 +-
 .../TestBlockHasMultipleReplicasOnSameDN.java   |   4 +-
 .../datanode/TestDataNodeVolumeFailure.java |   3 +-
 .../TestDatanodeProtocolRetryPolicy.java|   4 +-
 ...TestDnRespectsBlockReportSplitThreshold.java |   7 +-
 .../TestNNHandlesBlockReportPerStorage.java |   7 +-
 .../TestNNHandlesCombinedBlockReport.java   |   4 +-
 .../server/namenode/NNThroughputBenchmark.java  |   9 +-
 .../hdfs/server/namenode/TestDeadDatanode.java  |   4 +-
 .../hdfs/server/namenode/ha/TestDNFencing.java  |   4 +-
 24 files changed, 422 insertions(+), 46 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/03d4af39/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
index 46023ec..e169d0e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
@@ -46,6 +46,7 @@ import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.ReportBadBlo
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.StorageBlockReportProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.StorageReceivedDeletedBlocksProto;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.VersionRequestProto;
+import org.apache.hadoop.hdfs.server.protocol.BlockReportContext;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeCommand;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
@@ -156,7 +157,8 @@ public class DatanodeProtocolClientSideTranslatorPB 
implements
 
   @Override
   public DatanodeCommand 

[12/43] hadoop git commit: HDFS-7533. Datanode sometimes does not shutdown on receiving upgrade shutdown command. Contributed by Eric Payne. (cherry picked from commit 6bbf9fdd041d2413dd78e2bce51abae1

2015-08-14 Thread sjlee
HDFS-7533. Datanode sometimes does not shutdown on receiving upgrade shutdown 
command. Contributed by Eric Payne.
(cherry picked from commit 6bbf9fdd041d2413dd78e2bce51abae15f3334c2)

(cherry picked from commit 33534a0c9aef5024aa6f340e7ee24930c8fa8ed5)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e9a28251
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e9a28251
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e9a28251

Branch: refs/heads/sjlee/hdfs-merge
Commit: e9a28251ee46e64e1b99b2dd54b0432bdc0b9578
Parents: cc637d6
Author: Kihwal Lee kih...@apache.org
Authored: Mon Jan 12 15:38:17 2015 -0600
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 22:22:58 2015 -0700

--
 .../hadoop/hdfs/server/datanode/DataNode.java   | 10 +++---
 .../hdfs/server/datanode/TestDataNodeExit.java  | 16 
 2 files changed, 23 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9a28251/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index 3dc0c3b..3ecc4a2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -1627,9 +1627,13 @@ public class DataNode extends ReconfigurableBase
 // in order to avoid any further acceptance of requests, but the peers
 // for block writes are not closed until the clients are notified.
 if (dataXceiverServer != null) {
-  xserver.sendOOBToPeers();
-  ((DataXceiverServer) this.dataXceiverServer.getRunnable()).kill();
-  this.dataXceiverServer.interrupt();
+  try {
+xserver.sendOOBToPeers();
+((DataXceiverServer) this.dataXceiverServer.getRunnable()).kill();
+this.dataXceiverServer.interrupt();
+  } catch (Throwable e) {
+// Ignore, since the out of band messaging is advisory.
+  }
 }
 
 // Interrupt the checkDiskErrorThread and terminate it.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9a28251/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeExit.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeExit.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeExit.java
index 9d59496..c067b07 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeExit.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeExit.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.hdfs.server.datanode;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.io.IOException;
 
@@ -32,6 +33,7 @@ import org.apache.hadoop.hdfs.MiniDFSNNTopology;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
+import org.mockito.Mockito;
 
 /** 
  * Tests if DataNode process exits if all Block Pool services exit. 
@@ -88,4 +90,18 @@ public class TestDataNodeExit {
 stopBPServiceThreads(2, dn);
 assertFalse("DataNode should exit", dn.isDatanodeUp());
   }
+
+  @Test
+  public void testSendOOBToPeers() throws Exception {
+DataNode dn = cluster.getDataNodes().get(0);
+DataXceiverServer spyXserver = Mockito.spy(dn.getXferServer());
+NullPointerException e = new NullPointerException();
+Mockito.doThrow(e).when(spyXserver).sendOOBToPeers();
+dn.xserver = spyXserver;
+try {
+  dn.shutdown();
+} catch (Throwable t) {
+  fail("DataNode shutdown should not have thrown exception " + t);
+}
+  }
 }
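
The test above uses the spy/doThrow idiom because sendOOBToPeers() returns
void: when(spy.method()).thenThrow(...) does not compile for a void return.
A self-contained reminder of the pattern (generic example, unrelated to HDFS):

  import java.util.ArrayList;
  import java.util.List;
  import org.mockito.Mockito;

  public class DoThrowDemo {
    public static void main(String[] args) {
      List<String> spy = Mockito.spy(new ArrayList<String>());
      // doThrow(...).when(spy).method() is the stubbing form for void methods.
      Mockito.doThrow(new IllegalStateException("boom")).when(spy).clear();
      try {
        spy.clear();
      } catch (IllegalStateException expected) {
        System.out.println("stubbed void method threw: " + expected.getMessage());
      }
    }
  }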



[33/43] hadoop git commit: HDFS-8219. setStoragePolicy with folder behavior is different after cluster restart. (surendra singh lilhore via Xiaoyu Yao)

2015-08-14 Thread sjlee
HDFS-8219. setStoragePolicy with folder behavior is different after cluster 
restart. (surendra singh lilhore via Xiaoyu Yao)

(cherry picked from commit 0100b155019496d077f958904de7d385697d65d9)
(cherry picked from commit e68e8b3b5cff85bfd8bb5b00b9033f63577856d6)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b054cb68
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b054cb68
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b054cb68

Branch: refs/heads/sjlee/hdfs-merge
Commit: b054cb68fa0fc6d1e9e77ac84575731e7d1ec0c7
Parents: b4e227e
Author: Xiaoyu Yao x...@apache.org
Authored: Tue May 5 13:41:14 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 16:05:28 2015 -0700

--
 .../hadoop/hdfs/server/namenode/FSEditLog.java  |  2 +-
 .../hadoop/hdfs/TestBlockStoragePolicy.java | 43 
 2 files changed, 44 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b054cb68/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
index 20aaf07..0154ed9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
@@ -721,7 +721,7 @@ public class FSEditLog implements LogsPurgeable {
   .setClientMachine(
   newNode.getFileUnderConstructionFeature().getClientMachine())
   .setOverwrite(overwrite)
-  .setStoragePolicyId(newNode.getStoragePolicyID());
+  .setStoragePolicyId(newNode.getLocalStoragePolicyID());
 
 AclFeature f = newNode.getAclFeature();
 if (f != null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b054cb68/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
index d053a79..8ac25db 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
@@ -26,6 +26,7 @@ import java.util.*;
 
 import com.google.common.collect.Lists;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.protocol.*;
@@ -1173,4 +1174,46 @@ public class TestBlockStoragePolicy {
   cluster.shutdown();
 }
   }
+
+  @Test
+  public void testGetFileStoragePolicyAfterRestartNN() throws Exception {
+//HDFS8219
+final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+.numDataNodes(REPLICATION)
+.storageTypes(
+new StorageType[] {StorageType.DISK, StorageType.ARCHIVE})
+.build();
+cluster.waitActive();
+final DistributedFileSystem fs = cluster.getFileSystem();
+try {
+  final String file = "/testScheduleWithinSameNode/file";
+  Path dir = new Path("/testScheduleWithinSameNode");
+  fs.mkdirs(dir);
+  // 2. Set Dir policy
+  fs.setStoragePolicy(dir, "COLD");
+  // 3. Create file
+  final FSDataOutputStream out = fs.create(new Path(file));
+  out.writeChars("testScheduleWithinSameNode");
+  out.close();
+  // 4. Set Dir policy
+  fs.setStoragePolicy(dir, "HOT");
+  HdfsFileStatus status = fs.getClient().getFileInfo(file);
+  // 5. get file policy, it should be parent policy.
+  Assert
+  .assertTrue(
+  "File storage policy should be HOT",
+  status.getStoragePolicy() == HOT);
+  // 6. restart NameNode for reloading edits logs.
+  cluster.restartNameNode(true);
+  // 7. get file policy, it should be parent policy.
+  status = fs.getClient().getFileInfo(file);
+  Assert
+  .assertTrue(
+  "File storage policy should be HOT",
+  status.getStoragePolicy() == HOT);
+
+} finally {
+  cluster.shutdown();
+}
+  }
 }
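
The assertion pair works because HDFS resolves a file's effective storage
policy by walking up the tree until it finds an inode with a policy set
locally. A simplified, self-contained model of that lookup (an illustration of
the semantics only, not Hadoop code; the ids are arbitrary):

  public class PolicyLookupDemo {
    static final byte UNSPECIFIED = 0;

    static class Inode {
      final Inode parent;
      final byte localPolicy;            // UNSPECIFIED unless set explicitly
      Inode(Inode parent, byte localPolicy) {
        this.parent = parent;
        this.localPolicy = localPolicy;
      }
      byte effectivePolicy() {           // nearest explicit setting wins
        for (Inode n = this; n != null; n = n.parent) {
          if (n.localPolicy != UNSPECIFIED) return n.localPolicy;
        }
        return UNSPECIFIED;
      }
    }

    public static void main(String[] args) {
      byte HOT = 7;                      // arbitrary id for the demo
      Inode dir = new Inode(null, HOT);
      Inode file = new Inode(dir, UNSPECIFIED);
      System.out.println(file.effectivePolicy()); // 7: inherited from dir
    }
  }

The bug was that FSEditLog persisted the inherited (effective) id instead of
the file's own (local) id. After an edit-log replay the file then looked as if
COLD had been set on it directly, so the later change of the directory to HOT
no longer applied, which is exactly what this test exercises across the
NameNode restart.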



[30/43] hadoop git commit: HDFS-8072. Reserved RBW space is not released if client terminates while writing block. (Arpit Agarwal)

2015-08-14 Thread sjlee
HDFS-8072. Reserved RBW space is not released if client terminates while 
writing block. (Arpit Agarwal)

(cherry picked from commit f0324738c9db4f45d2b1ec5cfb46c5f2b7669571)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalReplicaInPipeline.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/de21de7e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/de21de7e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/de21de7e

Branch: refs/heads/sjlee/hdfs-merge
Commit: de21de7e2243ef8a89082121d838b88e3c10f05b
Parents: c3a3092
Author: Arpit Agarwal a...@apache.org
Authored: Wed Apr 8 11:38:21 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 11:21:29 2015 -0700

--
 .../hdfs/server/datanode/BlockReceiver.java |  1 +
 .../hdfs/server/datanode/ReplicaInPipeline.java |  6 ++
 .../datanode/ReplicaInPipelineInterface.java|  5 ++
 .../server/datanode/SimulatedFSDataset.java |  4 ++
 .../fsdataset/impl/TestRbwSpaceReservation.java | 67 +---
 5 files changed, 74 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/de21de7e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
index 75f1c36..2a6b46a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
@@ -808,6 +808,7 @@ class BlockReceiver implements Closeable {
   }
 
 } catch (IOException ioe) {
+  replicaInfo.releaseAllBytesReserved();
   if (datanode.isRestarting()) {
 // Do not throw if shutting down for restart. Otherwise, it will cause
 // premature termination of responder.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/de21de7e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
index 6a26640..cc55f85 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
@@ -148,6 +148,12 @@ public class ReplicaInPipeline extends ReplicaInfo
 return bytesReserved;
   }
   
+  @Override
+  public void releaseAllBytesReserved() {  // ReplicaInPipelineInterface
+getVolume().releaseReservedSpace(bytesReserved);
+bytesReserved = 0;
+  }
+
   @Override // ReplicaInPipelineInterface
   public synchronized void setLastChecksumAndDataLen(long dataLength, byte[] 
lastChecksum) {
 this.bytesOnDisk = dataLength;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/de21de7e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java
index 7f08b81..0263d0f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java
@@ -45,6 +45,11 @@ public interface ReplicaInPipelineInterface extends Replica {
   void setBytesAcked(long bytesAcked);
   
   /**
+   * Release any disk space reserved for this replica.
+   */
+  public void releaseAllBytesReserved();
+
+  /**
* store the checksum for the last chunk along with the data length
* @param dataLength number of bytes on disk
* @param lastChecksum - checksum bytes for the last chunk

http://git-wip-us.apache.org/repos/asf/hadoop/blob/de21de7e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java

[37/43] hadoop git commit: HDFS-8245. Standby namenode doesn't process DELETED_BLOCK if the addblock request is in edit log. Contributed by Rushabh S Shah. (cherry picked from commit 2d4ae3d18bc530fa9

2015-08-14 Thread sjlee
HDFS-8245. Standby namenode doesn't process DELETED_BLOCK if the addblock 
request is in edit log. Contributed by Rushabh S Shah.
(cherry picked from commit 2d4ae3d18bc530fa9f81ee616db8af3395705fb9)

(cherry picked from commit f264a5aeede7e144af11f5357c7f901993de8e12)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/470019e9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/470019e9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/470019e9

Branch: refs/heads/sjlee/hdfs-merge
Commit: 470019e9b88e0fcede926442b91d102b595c7ace
Parents: a776ef5
Author: Kihwal Lee kih...@apache.org
Authored: Fri May 8 16:37:26 2015 -0500
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 18:21:24 2015 -0700

--
 .../server/blockmanagement/BlockManager.java| 24 -
 .../server/datanode/TestBlockReplacement.java   | 97 
 .../hdfs/server/namenode/ha/TestDNFencing.java  |  4 -
 3 files changed, 118 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/470019e9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index e271d55..bb54402 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2287,8 +2287,15 @@ public class BlockManager {
   if (LOG.isDebugEnabled()) {
 LOG.debug("Processing previouly queued message " + rbi);
   }
-  processAndHandleReportedBlock(rbi.getStorageInfo(), 
-  rbi.getBlock(), rbi.getReportedState(), null);
+  if (rbi.getReportedState() == null) {
+// This is a DELETE_BLOCK request
+DatanodeStorageInfo storageInfo = rbi.getStorageInfo();
+removeStoredBlock(rbi.getBlock(),
+storageInfo.getDatanodeDescriptor());
+  } else {
+processAndHandleReportedBlock(rbi.getStorageInfo(),
+rbi.getBlock(), rbi.getReportedState(), null);
+  }
 }
   }
   
@@ -2984,6 +2991,17 @@ public class BlockManager {
 }
   }
 
+  private void removeStoredBlock(DatanodeStorageInfo storageInfo, Block block,
+  DatanodeDescriptor node) {
+if (shouldPostponeBlocksFromFuture &&
+namesystem.isGenStampInFuture(block)) {
+  queueReportedBlock(storageInfo, block, null,
+  QUEUE_REASON_FUTURE_GENSTAMP);
+  return;
+}
+removeStoredBlock(block, node);
+  }
+
   /**
* Modify (block--datanode) map. Possibly generate replication tasks, if the
* removed block is still valid.
@@ -3171,7 +3189,7 @@ public class BlockManager {
 for (ReceivedDeletedBlockInfo rdbi : srdb.getBlocks()) {
   switch (rdbi.getStatus()) {
   case DELETED_BLOCK:
-removeStoredBlock(rdbi.getBlock(), node);
+removeStoredBlock(storageInfo, rdbi.getBlock(), node);
 deleted++;
 break;
   case RECEIVED_BLOCK:
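
The queued-message replay above overloads a null reported state as a sentinel
meaning "this entry is a pending deletion, not a block report" (see the
DELETE_BLOCK comment in the first hunk). A standalone sketch of the dispatch,
with the types simplified (hypothetical, for the shape of the logic only):

  import java.util.Arrays;
  import java.util.List;

  public class NullSentinelDemo {
    enum ReportedState { RECEIVED, FINALIZED }  // null == queued DELETED_BLOCK

    public static void main(String[] args) {
      List<ReportedState> queued = Arrays.asList(ReportedState.RECEIVED, null);
      for (ReportedState s : queued) {
        if (s == null) {
          System.out.println("replay as removeStoredBlock(...)");
        } else {
          System.out.println("replay as processAndHandleReportedBlock(...)");
        }
      }
    }
  }

Queueing deletions instead of applying them immediately matters on the
standby: a DELETED_BLOCK for a block whose generation stamp is still in the
future must wait until the corresponding edits have been loaded, otherwise the
standby would apply the deletion against stale state.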

http://git-wip-us.apache.org/repos/asf/hadoop/blob/470019e9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java
index e0d7964..86b77d1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java
@@ -42,7 +42,9 @@ import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
 import org.apache.hadoop.hdfs.StorageType;
+import org.apache.hadoop.hdfs.client.BlockReportOptions;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
@@ -51,8 +53,11 @@ import 

[22/43] hadoop git commit: HDFS-7830. DataNode does not release the volume lock when adding a volume fails. (Lei Xu via Colin P. McCabe)

2015-08-14 Thread sjlee
HDFS-7830. DataNode does not release the volume lock when adding a volume 
fails. (Lei Xu via Colin P. McCabe)

(cherry picked from commit 5c1036d598051cf6af595740f1ab82092b0b6554)
(cherry picked from commit eefca23e8c5e474de1e25bf2ec8a5b266bbe8cfe)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c723f3b1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c723f3b1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c723f3b1

Branch: refs/heads/sjlee/hdfs-merge
Commit: c723f3b1bd9eab261ab5edca33c4dae5ce3d0d30
Parents: 65ae3e2
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Mar 10 18:20:25 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 00:06:16 2015 -0700

--
 .../hadoop/hdfs/server/common/Storage.java  |  2 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 16 ++-
 .../datanode/TestDataNodeHotSwapVolumes.java| 34 ++
 .../fsdataset/impl/FsDatasetTestUtil.java   | 49 
 .../fsdataset/impl/TestFsDatasetImpl.java   | 41 
 5 files changed, 109 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c723f3b1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
index 14b52ce..8d0129a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
@@ -672,7 +672,7 @@ public abstract class Storage extends StorageInfo {
  */
 public void lock() throws IOException {
   if (isShared()) {
-LOG.info("Locking is disabled");
+LOG.info("Locking is disabled for " + this.root);
 return;
   }
   FileLock newLock = tryLock();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c723f3b1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index cbcf6b8..f24d644 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -46,6 +46,7 @@ import javax.management.NotCompliantMBeanException;
 import javax.management.ObjectName;
 import javax.management.StandardMBean;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.Lists;
 import com.google.common.base.Preconditions;
 import org.apache.commons.logging.Log;
@@ -322,6 +323,12 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 LOG.info("Added volume - " + dir + ", StorageType: " + storageType);
   }
 
+  @VisibleForTesting
+  public FsVolumeImpl createFsVolume(String storageUuid, File currentDir,
+  StorageType storageType) throws IOException {
+return new FsVolumeImpl(this, storageUuid, currentDir, conf, storageType);
+  }
+
   @Override
   public void addVolume(final StorageLocation location,
  final List<NamespaceInfo> nsInfos)
@@ -335,8 +342,8 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 final Storage.StorageDirectory sd = builder.getStorageDirectory();
 
 StorageType storageType = location.getStorageType();
-final FsVolumeImpl fsVolume = new FsVolumeImpl(
-this, sd.getStorageUuid(), sd.getCurrentDir(), this.conf, storageType);
+final FsVolumeImpl fsVolume =
+createFsVolume(sd.getStorageUuid(), sd.getCurrentDir(), storageType);
 final ReplicaMap tempVolumeMap = new ReplicaMap(fsVolume);
 ArrayList<IOException> exceptions = Lists.newArrayList();
 
@@ -352,6 +359,11 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {

[39/43] hadoop git commit: HDFS-8431. hdfs crypto class not found in Windows. Contributed by Anu Engineer.

2015-08-14 Thread sjlee
HDFS-8431. hdfs crypto class not found in Windows. Contributed by Anu Engineer.

(cherry picked from commit 50eeea13000f0c82e0567410f0f8b611248f8c1b)

Conflicts:
hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd

(cherry picked from commit 25db34127811fbadb9a698fa3a76e24d426fb0f6)

Conflicts:
hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/77a10e76
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/77a10e76
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/77a10e76

Branch: refs/heads/sjlee/hdfs-merge
Commit: 77a10e76e99c14cd26ebb3664304f6ed9cc7bf65
Parents: 2d5e60f
Author: cnauroth cnaur...@apache.org
Authored: Wed May 27 22:54:00 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 18:41:50 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/77a10e76/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
index 69424ed..453a023 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
@@ -47,7 +47,7 @@ if "%1" == "--config" (
   goto print_usage
   )
 
-  set hdfscommands=dfs namenode secondarynamenode journalnode zkfc datanode 
dfsadmin haadmin fsck balancer jmxget oiv oev fetchdt getconf groups 
snapshotDiff lsSnapshottableDir cacheadmin mover storagepolicies
+  set hdfscommands=dfs namenode secondarynamenode journalnode zkfc datanode 
dfsadmin haadmin fsck balancer jmxget oiv oev fetchdt getconf groups 
snapshotDiff lsSnapshottableDir cacheadmin mover storagepolicies crypto
   for %%i in ( %hdfscommands% ) do (
 if %hdfs-command% == %%i set hdfscommand=true
   )
@@ -159,6 +159,10 @@ goto :eof
   set CLASS=org.apache.hadoop.hdfs.tools.GetStoragePolicies
   goto :eof
 
+:crypto
+  set CLASS=org.apache.hadoop.hdfs.tools.CryptoAdmin
+  goto :eof
+
 @rem This changes %1, %2 etc. Hence those cannot be used after calling this.
 :make_command_arguments
  if "%1" == "--config" (
@@ -207,6 +211,7 @@ goto :eof
   @echo   lsSnapshottableDir   list all snapshottable dirs owned by the 
current user
   @echoUse -help to see options
   @echo   cacheadmin   configure the HDFS cache
+  @echo   crypto   configure HDFS encryption zones
   @echo   moverrun a utility to move block replicas across 
storage types
   @echo   storagepolicies  get all the existing block storage policies
   @echo.



[43/43] hadoop git commit: HDFS-7314. When the DFSClient lease cannot be renewed, abort open-for-write files rather than the entire DFSClient. (mingma)

2015-08-14 Thread sjlee
HDFS-7314. When the DFSClient lease cannot be renewed, abort open-for-write 
files rather than the entire DFSClient. (mingma)

(cherry picked from commit fbd88f1062f3c4b208724d208e3f501eb196dfab)
(cherry picked from commit 516bbf1c20547dc513126df0d9f0934bb65c10c7)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fb1bf424
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fb1bf424
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fb1bf424

Branch: refs/heads/sjlee/hdfs-merge
Commit: fb1bf424bdad20fff7ab390ce75c4bec558e7e6d
Parents: e1b4e69
Author: Ming Ma min...@apache.org
Authored: Thu Jul 16 12:33:57 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Fri Aug 14 00:06:13 2015 -0700

--
 .../java/org/apache/hadoop/hdfs/DFSClient.java  | 16 +
 .../org/apache/hadoop/hdfs/LeaseRenewer.java| 12 +++-
 .../hadoop/hdfs/TestDFSClientRetries.java   | 66 +++-
 3 files changed, 76 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb1bf424/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index ad24a0d..20f9d00 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -903,23 +903,9 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   void closeConnectionToNamenode() {
 RPC.stopProxy(namenode);
   }
-  
-  /** Abort and release resources held.  Ignore all errors. */
-  void abort() {
-clientRunning = false;
-closeAllFilesBeingWritten(true);
-try {
-  // remove reference to this client and stop the renewer,
-  // if there is no more clients under the renewer.
-  getLeaseRenewer().closeClient(this);
-} catch (IOException ioe) {
-   LOG.info("Exception occurred while aborting the client " + ioe);
-}
-closeConnectionToNamenode();
-  }
 
   /** Close/abort all files being written. */
-  private void closeAllFilesBeingWritten(final boolean abort) {
+  public void closeAllFilesBeingWritten(final boolean abort) {
 for(;;) {
   final long inodeId;
   final DFSOutputStream out;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb1bf424/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/LeaseRenewer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/LeaseRenewer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/LeaseRenewer.java
index f8f337c..855b539 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/LeaseRenewer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/LeaseRenewer.java
@@ -211,6 +211,12 @@ class LeaseRenewer {
 return renewal;
   }
 
+  /** Used for testing only. */
+  @VisibleForTesting
+  public synchronized void setRenewalTime(final long renewal) {
+this.renewal = renewal;
+  }
+
   /** Add a client. */
   private synchronized void addClient(final DFSClient dfsc) {
 for(DFSClient c : dfsclients) {
@@ -450,8 +456,12 @@ class LeaseRenewer {
   + (elapsed/1000) + " seconds.  Aborting ...", ie);
   synchronized (this) {
 while (!dfsclients.isEmpty()) {
-  dfsclients.get(0).abort();
+  DFSClient dfsClient = dfsclients.get(0);
+  dfsClient.closeAllFilesBeingWritten(true);
+  closeClient(dfsClient);
 }
+//Expire the current LeaseRenewer thread.
+emptyTime = 0;
   }
   break;
 } catch (IOException ie) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb1bf424/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
index 382ad48..0a39cb5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
+++ 

[34/43] hadoop git commit: HDFS-7980. Incremental BlockReport will dramatically slow down namenode startup. Contributed by Walter Su

2015-08-14 Thread sjlee
HDFS-7980. Incremental BlockReport will dramatically slow down namenode 
startup.  Contributed by Walter Su

(cherry picked from commit 4e1f2eb3955a97a70cf127dc97ae49201a90f5e0)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5a28c6a3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5a28c6a3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5a28c6a3

Branch: refs/heads/sjlee/hdfs-merge
Commit: 5a28c6a37cab5f1061b6ed9536341da537d51b5a
Parents: b054cb6
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Thu May 7 11:36:35 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 16:32:37 2015 -0700

--
 .../hadoop/hdfs/server/blockmanagement/BlockManager.java | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5a28c6a3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index e5d97d1..e271d55 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1815,7 +1815,7 @@ public class BlockManager {
 return !node.hasStaleStorages();
   }
 
-  if (storageInfo.numBlocks() == 0) {
+  if (storageInfo.getBlockReportCount() == 0) {
 // The first block report can be processed a lot more efficiently than
 // ordinary block reports.  This shortens restart times.
 processFirstBlockReport(storageInfo, newReport);
@@ -2038,7 +2038,7 @@ public class BlockManager {
   final BlockListAsLongs report) throws IOException {
 if (report == null) return;
 assert (namesystem.hasWriteLock());
-assert (storageInfo.numBlocks() == 0);
+assert (storageInfo.getBlockReportCount() == 0);
 BlockReportIterator itBR = report.getBlockReportIterator();
 
 while(itBR.hasNext()) {
@@ -2451,14 +2451,14 @@ public class BlockManager {
 }
 
 // just add it
-storageInfo.addBlock(storedBlock);
+boolean added = storageInfo.addBlock(storedBlock);
 
 // Now check for completion of blocks and safe block count
 int numCurrentReplica = countLiveNodes(storedBlock);
 if (storedBlock.getBlockUCState() == BlockUCState.COMMITTED
 && numCurrentReplica >= minReplication) {
   completeBlock(storedBlock.getBlockCollection(), storedBlock, false);
-} else if (storedBlock.isComplete()) {
+} else if (storedBlock.isComplete() && added) {
   // check whether safe replication is reached for the block
   // only complete blocks are counted towards that.
   // In the case that the block just became complete above, completeBlock()
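
Two details in this hunk are easy to miss (explanatory sketch; the else-branch
name is paraphrased from the surrounding method, not shown in the hunk):

  // numBlocks() > 0 can already be true before any FULL report, because
  // incremental block reports (IBRs) may arrive first; getBlockReportCount()
  // counts full reports only, so the fast first-report path still triggers on
  // the true first report instead of falling into the slow diff-based path.
  if (storageInfo.getBlockReportCount() == 0) {
    processFirstBlockReport(storageInfo, newReport);
  } else {
    processReport(storageInfo, newReport);   // assumed name of the slow path
  }

  // addBlock() returns false when the replica is already recorded on that
  // storage (again, an IBR may have added it first); guarding on 'added'
  // keeps such duplicates from being counted twice toward the safe-block
  // threshold during startup.
  boolean added = storageInfo.addBlock(storedBlock);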



[28/43] hadoop git commit: HDFS-7742. Favoring decommissioning node for replication can cause a block to stay underreplicated for long periods. Contributed by Nathan Roberts. (cherry picked from commi

2015-08-14 Thread sjlee
HDFS-7742. Favoring decommissioning node for replication can cause a block to 
stay
underreplicated for long periods. Contributed by Nathan Roberts.
(cherry picked from commit 04ee18ed48ceef34598f954ff40940abc9fde1d2)

(cherry picked from commit c4cedfc1d601127430c70ca8ca4d4e2ee2d1003d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c6b68a82
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c6b68a82
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c6b68a82

Branch: refs/heads/sjlee/hdfs-merge
Commit: c6b68a82adea8de488b255594d35db8e01f5fc8f
Parents: 03d4af3
Author: Kihwal Lee kih...@apache.org
Authored: Mon Mar 30 10:11:25 2015 -0500
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 10:58:04 2015 -0700

--
 .../server/blockmanagement/BlockManager.java| 10 ++---
 .../blockmanagement/TestBlockManager.java   | 42 
 2 files changed, 47 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c6b68a82/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 69f3e46..e5d97d1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1652,7 +1652,8 @@ public class BlockManager {
   // If so, do not select the node as src node
   if ((nodesCorrupt != null)  nodesCorrupt.contains(node))
 continue;
-  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
+  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY &&
+   !node.isDecommissionInProgress() &&
node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
   {
 continue; // already reached replication limit
@@ -1667,13 +1668,12 @@ public class BlockManager {
   // never use already decommissioned nodes
   if(node.isDecommissioned())
 continue;
-  // we prefer nodes that are in DECOMMISSION_INPROGRESS state
-  if(node.isDecommissionInProgress() || srcNode == null) {
+
+  // We got this far, current node is a reasonable choice
+  if (srcNode == null) {
 srcNode = node;
 continue;
   }
-  if(srcNode.isDecommissionInProgress())
-continue;
   // switch to a different node randomly
   // this to prevent from deterministically selecting the same node even
   // if the node failed to replicate the block on previous iterations
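
Net effect of the rewritten selection logic (a recap of the hunk above, not
new code): a node is now skipped as the copy source only when

  priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
      && !node.isDecommissionInProgress()
      && node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams

so decommissioning nodes stay exempt from the soft maxReplicationStreams
limit (the hard limit is enforced elsewhere, as the new test exercises), but
they are no longer always preferred over other live replicas. The source is
picked randomly among all eligible nodes, which prevents a busy
decommissioning node from keeping a block under-replicated for long periods.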

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c6b68a82/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
index ddb6143..7eec52d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
@@ -536,6 +536,48 @@ public class TestBlockManager {
   }
 
   @Test
+  public void testFavorDecomUntilHardLimit() throws Exception {
+bm.maxReplicationStreams = 0;
+bm.replicationStreamsHardLimit = 1;
+
+long blockId = 42; // arbitrary
+Block aBlock = new Block(blockId, 0, 0);
+List<DatanodeDescriptor> origNodes = getNodes(0, 1);
+// Add the block to the first node.
+addBlockOnNodes(blockId,origNodes.subList(0,1));
+origNodes.get(0).startDecommission();
+
+List<DatanodeDescriptor> cntNodes = new LinkedList<DatanodeDescriptor>();
+List<DatanodeStorageInfo> liveNodes = new LinkedList<DatanodeStorageInfo>();
+
+assertNotNull("Chooses decommissioning source node for a normal replication"
++ " if all available source nodes have reached their replication"
++ " limits below the hard limit.",
+bm.chooseSourceDatanode(
+aBlock,
+cntNodes,
+liveNodes,
+new NumberReplicas(),
+UnderReplicatedBlocks.QUEUE_UNDER_REPLICATED));
+
+
+// Increase the replication count to test replication count > hard limit
+

[25/43] hadoop git commit: HDFS-7930. commitBlockSynchronization() does not remove locations. (yliu)

2015-08-14 Thread sjlee
HDFS-7930. commitBlockSynchronization() does not remove locations. (yliu)

(cherry picked from commit 90164ffd84f6ef56e9f8f99dcc7424a8d115dbae)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2c9a7461
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2c9a7461
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2c9a7461

Branch: refs/heads/sjlee/hdfs-merge
Commit: 2c9a7461ec2ceba5885e95bc79f8dcbfd198df60
Parents: 0379841
Author: yliu y...@apache.org
Authored: Thu Mar 19 23:24:55 2015 +0800
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 09:58:07 2015 -0700

--
 .../server/blockmanagement/BlockManager.java| 41 
 .../hdfs/server/namenode/FSNamesystem.java  |  8 +++-
 2 files changed, 47 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2c9a7461/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index d26cc52..5a38351 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1931,6 +1931,47 @@ public class BlockManager {
   }
 
   /**
+   * Mark block replicas as corrupt except those on the storages in 
+   * newStorages list.
+   */
+  public void markBlockReplicasAsCorrupt(BlockInfo block, 
+  long oldGenerationStamp, long oldNumBytes, 
+  DatanodeStorageInfo[] newStorages) throws IOException {
+assert namesystem.hasWriteLock();
+BlockToMarkCorrupt b = null;
+if (block.getGenerationStamp() != oldGenerationStamp) {
+  b = new BlockToMarkCorrupt(block, oldGenerationStamp,
+  "genstamp does not match " + oldGenerationStamp
+  + " : " + block.getGenerationStamp(), Reason.GENSTAMP_MISMATCH);
+} else if (block.getNumBytes() != oldNumBytes) {
+  b = new BlockToMarkCorrupt(block,
+  "length does not match " + oldNumBytes
+  + " : " + block.getNumBytes(), Reason.SIZE_MISMATCH);
+} else {
+  return;
+}
+
+for (DatanodeStorageInfo storage : getStorages(block)) {
+  boolean isCorrupt = true;
+  if (newStorages != null) {
+for (DatanodeStorageInfo newStorage : newStorages) {
+  if (newStorage != null && storage.equals(newStorage)) {
+isCorrupt = false;
+break;
+  }
+}
+  }
+  if (isCorrupt) {
+blockLog.info("BLOCK* markBlockReplicasAsCorrupt: mark block replica" +
+b + " on " + storage.getDatanodeDescriptor() +
+" as corrupt because the dn is not in the new committed " +
+"storage list.");
+markBlockAsCorrupt(b, storage, storage.getDatanodeDescriptor());
+  }
+}
+  }
+
+  /**
* processFirstBlockReport is intended only for processing initial block
* reports, the first block report received from a DN after it registers.
* It just adds all the valid replicas to the datanode, without calculating 
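
The nested loop in markBlockReplicasAsCorrupt above is a set difference: every
storage currently holding the block, minus the storages in the newly committed
pipeline, gets marked corrupt. Standalone illustration with the types
simplified to strings (hypothetical, for the shape of the logic only):

  import java.util.Arrays;
  import java.util.HashSet;
  import java.util.List;
  import java.util.Set;

  public class MarkExceptDemo {
    public static void main(String[] args) {
      List<String> storagesWithBlock = Arrays.asList("s1", "s2", "s3", "s4");
      Set<String> committedPipeline = new HashSet<String>(Arrays.asList("s2", "s4"));
      for (String storage : storagesWithBlock) {
        if (!committedPipeline.contains(storage)) {
          // stale replica: wrong genstamp or length after block recovery
          System.out.println("mark corrupt: " + storage);
        }
      }
    }
  }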

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2c9a7461/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index c92b431..fa52981 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -4791,6 +4791,8 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
   throw new IOException("Block (=" + lastblock + ") not found");
 }
   }
+  final long oldGenerationStamp = storedBlock.getGenerationStamp();
+  final long oldNumBytes = storedBlock.getNumBytes();
   //
   // The 

[35/43] hadoop git commit: HDFS-7894. Rolling upgrade readiness is not updated in jmx until query command is issued. Contributed by Brahma Reddy Battula. (cherry picked from commit 6f622672b62aa8d7190

2015-08-14 Thread sjlee
HDFS-7894. Rolling upgrade readiness is not updated in jmx until query command 
is issued. Contributed by Brahma Reddy Battula.
(cherry picked from commit 6f622672b62aa8d719060063ef0e47480cdc8655)

(cherry picked from commit 802a5775f3522c57c60ae29ecb9533dbbfecfe76)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/995382c5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/995382c5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/995382c5

Branch: refs/heads/sjlee/hdfs-merge
Commit: 995382c5234ad6c07f327e5d1f2a1c7e391a0b60
Parents: 5a28c6a
Author: Kihwal Lee kih...@apache.org
Authored: Fri May 8 09:32:07 2015 -0500
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 16:35:27 2015 -0700

--
 .../hdfs/server/namenode/FSNamesystem.java  | 23 ++--
 1 file changed, 21 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/995382c5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 5f396f7..2c6a65d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -8417,11 +8417,30 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
 
   @Override  // NameNodeMXBean
   public RollingUpgradeInfo.Bean getRollingUpgradeStatus() {
+if (!isRollingUpgrade()) {
+  return null;
+}
 RollingUpgradeInfo upgradeInfo = getRollingUpgradeInfo();
-if (upgradeInfo != null) {
+if (upgradeInfo.createdRollbackImages()) {
   return new RollingUpgradeInfo.Bean(upgradeInfo);
 }
-return null;
+readLock();
+try {
+  // check again after acquiring the read lock.
+  upgradeInfo = getRollingUpgradeInfo();
+  if (upgradeInfo == null) {
+return null;
+  }
+  if (!upgradeInfo.createdRollbackImages()) {
+boolean hasRollbackImage = this.getFSImage().hasRollbackFSImage();
+upgradeInfo.setCreatedRollbackImages(hasRollbackImage);
+  }
+} catch (IOException ioe) {
+  LOG.warn("Encountered exception setting Rollback Image", ioe);
+} finally {
+  readUnlock();
+}
+return new RollingUpgradeInfo.Bean(upgradeInfo);
   }
 
   /** Is rolling upgrade in progress? */
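
The reshaped getter is a check/recheck-under-lock pattern: answer cheaply
without the lock when possible, otherwise take the read lock, re-read the
state (it may have changed in between), and only then perform the expensive
probe. A self-contained sketch of the same shape (hypothetical stand-ins for
the FSNamesystem fields and for hasRollbackFSImage()):

  import java.util.concurrent.locks.ReentrantReadWriteLock;

  public class RecheckUnderLockDemo {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private volatile Boolean rollbackImageCreated; // null == not yet probed

    Boolean status() {
      Boolean v = rollbackImageCreated;
      if (v != null && v) {
        return v;                       // fast path, no lock taken
      }
      lock.readLock().lock();
      try {
        // re-read under the lock; another thread may have updated it
        if (rollbackImageCreated == null) {
          rollbackImageCreated = probeDisk();  // stand-in for hasRollbackFSImage()
        }
        return rollbackImageCreated;
      } finally {
        lock.readLock().unlock();
      }
    }

    private boolean probeDisk() { return true; }

    public static void main(String[] args) {
      System.out.println(new RecheckUnderLockDemo().status());
    }
  }

Before the patch, nothing refreshed createdRollbackImages until the query
command was issued, so the JMX bean kept reporting stale status; the recheck
under the read lock is what keeps the JMX view current on its own.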



[40/43] hadoop git commit: HDFS-7609. Avoid retry cache collision when Standby NameNode loading edits. Contributed by Ming Ma.

2015-08-14 Thread sjlee
HDFS-7609. Avoid retry cache collision when Standby NameNode loading edits. 
Contributed by Ming Ma.

(cherry picked from commit 7817674a3a4d097b647dd77f1345787dd376d5ea)
(cherry picked from commit 17fb442a4c4e43105374c97fccd68dd966729a19)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fad2a062
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fad2a062
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fad2a062

Branch: refs/heads/sjlee/hdfs-merge
Commit: fad2a062ddbb955a42dd5a90d64781617287f8df
Parents: 77a10e7
Author: Jing Zhao ji...@apache.org
Authored: Fri May 29 11:05:13 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Thu Aug 13 23:33:31 2015 -0700

--
 .../hdfs/server/namenode/FSNamesystem.java  | 18 --
 .../hdfs/server/namenode/NameNodeRpcServer.java | 20 +++
 .../namenode/ha/TestRetryCacheWithHA.java   | 37 ++--
 3 files changed, 55 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fad2a062/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 2c6a65d..19edbb5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -2003,7 +2003,6 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
 
 HdfsFileStatus resultingStat = null;
 FSPermissionChecker pc = getPermissionChecker();
-checkOperation(OperationCategory.WRITE);
 waitForLoadingFSImage();
 writeLock();
 try {
@@ -2563,7 +2562,6 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
 boolean skipSync = false;
 HdfsFileStatus stat = null;
 FSPermissionChecker pc = getPermissionChecker();
-checkOperation(OperationCategory.WRITE);
 if (blockSize < minBlockSize) {
   throw new IOException("Specified block size is less than configured" +
" minimum value (" + DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY
@@ -3137,7 +3135,6 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
 
 LocatedBlock lb = null;
 FSPermissionChecker pc = getPermissionChecker();
-checkOperation(OperationCategory.WRITE);
 byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
 writeLock();
 try {
@@ -3806,7 +3803,6 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
   throw new IOException("Invalid name: " + dst);
 }
 FSPermissionChecker pc = getPermissionChecker();
-checkOperation(OperationCategory.WRITE);
 byte[][] srcComponents = FSDirectory.getPathComponentsForReservedPath(src);
 byte[][] dstComponents = FSDirectory.getPathComponentsForReservedPath(dst);
 boolean status = false;
@@ -3879,7 +3875,6 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
 }
 final FSPermissionChecker pc = getPermissionChecker();
 
-checkOperation(OperationCategory.WRITE);
 CacheEntry cacheEntry = RetryCache.waitForCompletion(retryCache);
 if (cacheEntry != null && cacheEntry.isSuccess()) {
   return; // Return previous response
@@ -4003,7 +3998,6 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List<INode> removedINodes = new ChunkedArrayList<INode>();
 FSPermissionChecker pc = getPermissionChecker();
-checkOperation(OperationCategory.WRITE);
 byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
 boolean ret = false;
 
@@ -7048,7 +7042,6 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
   void updatePipeline(String clientName, ExtendedBlock oldBlock, 
   ExtendedBlock newBlock, DatanodeID[] newNodes, String[] newStorageIDs)
   throws IOException {
-checkOperation(OperationCategory.WRITE);
 CacheEntry cacheEntry = RetryCache.waitForCompletion(retryCache);
   if (cacheEntry != null && cacheEntry.isSuccess()) {
   return; // Return previous response
@@ -8141,7 +8134,6 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
*/
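
The message is truncated here, but the hunks above all delete a checkOperation(OperationCategory.WRITE) call that sat next to the retry-cache handling in FSNamesystem, and the diffstat shows NameNodeRpcServer gaining those checks instead, so the operation-category check appears to run before any retry-cache entry can be created or consulted. A hedged standalone sketch of that ordering, with toy names in place of the real RPC server and RetryCache:

import java.util.HashSet;
import java.util.Set;

// Toy illustration only: reject operations the node cannot serve *before*
// consulting or populating the retry cache, so a retried call cannot
// collide with an entry recorded while the node was standby.
public class RetryCacheOrderingSketch {
  private final Set<Long> succeededCalls = new HashSet<Long>(); // toy retry cache
  private volatile boolean active;

  public String handleWrite(long callId, Runnable mutation) {
    // 1. State check first: a standby bails out here, cache untouched.
    if (!active) {
      throw new IllegalStateException("Operation category WRITE is not supported");
    }
    // 2. Only a node that can execute the call touches the retry cache.
    if (succeededCalls.contains(callId)) {
      return "previous response"; // replayed RPC after a client retry
    }
    mutation.run();
    succeededCalls.add(callId);
    return "new response";
  }

  public void setActive(boolean active) { this.active = active; }
}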
   

[1/4] hadoop git commit: HDFS-7235. DataNode#transferBlock should report blocks that don't exist using reportBadBlock (yzhang via cmccabe) (cherry picked from commit ac9ab037e9a9b03e4fa9bd471d3ab9940b

2015-08-14 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 2c0063d42 -> c2a9c3929
  refs/heads/branch-2.6 1c1d86733 -> 33e559d75
  refs/heads/branch-2.7 5ba406534 -> f40714f8d
  refs/heads/trunk d25cb8fe1 -> f2b4bc9b6


HDFS-7235. DataNode#transferBlock should report blocks that don't exist using 
reportBadBlock (yzhang via cmccabe)
(cherry picked from commit ac9ab037e9a9b03e4fa9bd471d3ab9940beb53fb)

(cherry picked from commit 842a54a5f66e76eb79321b66cc3b8820fe66c5cd)

Conflicts:
hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/33e559d7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/33e559d7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/33e559d7

Branch: refs/heads/branch-2.6
Commit: 33e559d75c44b304e840b43349cbd1a87fa8a2e0
Parents: 1c1d867
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Oct 28 16:41:22 2014 -0700
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 11:35:07 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hadoop/hdfs/server/datanode/DataNode.java   | 59 +++-
 .../UnexpectedReplicaStateException.java| 45 +++
 .../server/datanode/fsdataset/FsDatasetSpi.java | 28 ++
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 54 --
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 46 ---
 .../org/apache/hadoop/hdfs/TestReplication.java | 32 ---
 .../server/datanode/SimulatedFSDataset.java | 43 --
 8 files changed, 270 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/33e559d7/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 59c5acd..10fd981 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -38,6 +38,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7213. processIncrementalBlockReport performance degradation.
 (Eric Payne via kihwal)
 
+HDFS-7235. DataNode#transferBlock should report blocks that don't exist
+using reportBadBlock (yzhang via cmccabe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/33e559d7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index fe57bc3..badb845 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -56,7 +56,9 @@ import java.io.BufferedOutputStream;
 import java.io.ByteArrayInputStream;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
+import java.io.EOFException;
 import java.io.FileInputStream;
+import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
@@ -1773,30 +1775,59 @@ public class DataNode extends ReconfigurableBase
   int getXmitsInProgress() {
 return xmitsInProgress.get();
   }
-
+
+  private void reportBadBlock(final BPOfferService bpos,
+  final ExtendedBlock block, final String msg) {
+FsVolumeSpi volume = getFSDataset().getVolume(block);
+bpos.reportBadBlocks(
+block, volume.getStorageID(), volume.getStorageType());
+LOG.warn(msg);
+  }
+
   private void transferBlock(ExtendedBlock block, DatanodeInfo[] xferTargets,
   StorageType[] xferTargetStorageTypes) throws IOException {
 BPOfferService bpos = getBPOSForBlock(block);
 DatanodeRegistration bpReg = 
getDNRegistrationForBP(block.getBlockPoolId());
-
-if (!data.isValidBlock(block)) {
-  // block does not exist or is under-construction
+
+boolean replicaNotExist = false;
+boolean replicaStateNotFinalized = false;
+boolean blockFileNotExist = false;
+boolean lengthTooShort = false;
+
+try {
+  data.checkBlock(block, block.getNumBytes(), ReplicaState.FINALIZED);
+} catch (ReplicaNotFoundException e) {
+  replicaNotExist = true;
+} catch (UnexpectedReplicaStateException e) {
+  replicaStateNotFinalized = true;
+} catch (FileNotFoundException e) {
+  blockFileNotExist = true;
+} catch (EOFException e) {
+  lengthTooShort = true;
+} catch (IOException e) 
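
The message is cut off above, but the visible shape of the new transferBlock() is a single checkBlock() call whose distinct exception types are folded into per-failure flags, each of which can then be routed to reportBadBlock() instead of being silently dropped. A standalone sketch of that dispatch with simplified stand-in exception classes:

import java.io.EOFException;
import java.io.FileNotFoundException;
import java.io.IOException;

// Sketch of the exception-to-flag classification used in transferBlock.
// The two custom exceptions are local stand-ins for the HDFS ones.
public class CheckBlockSketch {
  static class ReplicaNotFoundException extends IOException {}
  static class UnexpectedReplicaStateException extends IOException {}

  interface BlockCheck { void run() throws IOException; } // stand-in for data.checkBlock(...)

  static String classify(BlockCheck check) throws IOException {
    boolean replicaNotExist = false;
    boolean replicaStateNotFinalized = false;
    boolean blockFileNotExist = false;
    boolean lengthTooShort = false;
    try {
      check.run();
    } catch (ReplicaNotFoundException e) {
      replicaNotExist = true;
    } catch (UnexpectedReplicaStateException e) {
      replicaStateNotFinalized = true;
    } catch (FileNotFoundException e) {
      blockFileNotExist = true;
    } catch (EOFException e) {
      lengthTooShort = true;
    }
    // any other IOException propagates to the caller, as in the patch
    if (replicaNotExist) return "replica does not exist";
    if (replicaStateNotFinalized) return "replica is not FINALIZED";
    if (blockFileNotExist) return "block file is missing on disk";
    if (lengthTooShort) return "on-disk replica is shorter than expected";
    return "ok";
  }
}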

[2/4] hadoop git commit: HDFS-7235. DataNode#transferBlock should report blocks that don't exist using reportBadBlock (yzhang via cmccabe) Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HDFS-7235. DataNode#transferBlock should report blocks that don't exist using 
reportBadBlock (yzhang via cmccabe)
Moved CHANGES.txt entry to 2.6.1


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f2b4bc9b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f2b4bc9b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f2b4bc9b

Branch: refs/heads/trunk
Commit: f2b4bc9b6a1bd3f9dbfc4e85c1b9bde238da3627
Parents: d25cb8f
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 11:37:39 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 11:37:39 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f2b4bc9b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 1f72264..e4e2896 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1819,9 +1819,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-7301. TestMissingBlocksAlert should use MXBeans instead of old web UI.
 (Zhe Zhang via wheat9)
 
-HDFS-7235. DataNode#transferBlock should report blocks that don't exist
-using reportBadBlock (yzhang via cmccabe)
-
 HDFS-7263. Snapshot read can reveal future bytes for appended files.
 (Tao Luo via shv)
 
@@ -2339,6 +2336,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7213. processIncrementalBlockReport performance degradation.
 (Eric Payne via kihwal)
 
+HDFS-7235. DataNode#transferBlock should report blocks that don't exist
+using reportBadBlock (yzhang via cmccabe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[3/4] hadoop git commit: HDFS-7235. DataNode#transferBlock should report blocks that don't exist using reportBadBlock (yzhang via cmccabe) Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HDFS-7235. DataNode#transferBlock should report blocks that don't exist using 
reportBadBlock (yzhang via cmccabe)
Moved CHANGES.txt entry to 2.6.1

(cherry picked from commit f2b4bc9b6a1bd3f9dbfc4e85c1b9bde238da3627)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c2a9c392
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c2a9c392
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c2a9c392

Branch: refs/heads/branch-2
Commit: c2a9c3929eed756b7fd6ac20713530ed89be9f3e
Parents: 2c0063d
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 11:37:39 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 11:38:03 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c2a9c392/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 60bc4cb..869791c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1486,9 +1486,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-7301. TestMissingBlocksAlert should use MXBeans instead of old web UI.
 (Zhe Zhang via wheat9)
 
-HDFS-7235. DataNode#transferBlock should report blocks that don't exist
-using reportBadBlock (yzhang via cmccabe)
-
 HDFS-7263. Snapshot read can reveal future bytes for appended files.
 (Tao Luo via shv)
 
@@ -2014,6 +2011,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7213. processIncrementalBlockReport performance degradation.
 (Eric Payne via kihwal)
 
+HDFS-7235. DataNode#transferBlock should report blocks that don't exist
+using reportBadBlock (yzhang via cmccabe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[4/4] hadoop git commit: HDFS-7235. DataNode#transferBlock should report blocks that don't exist using reportBadBlock (yzhang via cmccabe) Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread vinayakumarb
HDFS-7235. DataNode#transferBlock should report blocks that don't exist using 
reportBadBlock (yzhang via cmccabe)
Moved CHANGES.txt entry to 2.6.1

(cherry picked from commit f2b4bc9b6a1bd3f9dbfc4e85c1b9bde238da3627)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f40714f8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f40714f8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f40714f8

Branch: refs/heads/branch-2.7
Commit: f40714f8d89b9499131a868f19bbcc8e303d8d0e
Parents: 5ba4065
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 11:37:39 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 11:38:28 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f40714f8/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index efa00af..a851e70 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -688,9 +688,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-7301. TestMissingBlocksAlert should use MXBeans instead of old web UI.
 (Zhe Zhang via wheat9)
 
-HDFS-7235. DataNode#transferBlock should report blocks that don't exist
-using reportBadBlock (yzhang via cmccabe)
-
 HDFS-7263. Snapshot read can reveal future bytes for appended files.
 (Tao Luo via shv)
 
@@ -1209,6 +1206,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7213. processIncrementalBlockReport performance degradation.
 (Eric Payne via kihwal)
 
+HDFS-7235. DataNode#transferBlock should report blocks that don't exist
+using reportBadBlock (yzhang via cmccabe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[17/43] hadoop git commit: HDFS-7009. Active NN and standby NN have different live nodes. Contributed by Ming Ma.

2015-08-14 Thread sjlee
HDFS-7009. Active NN and standby NN have different live nodes. Contributed by 
Ming Ma.

(cherry picked from commit 769507bd7a501929d9a2fd56c72c3f50673488a4)
(cherry picked from commit 657a6e389b3f6eae43efb11deb6253c3b1255a51)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d5ddc345
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d5ddc345
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d5ddc345

Branch: refs/heads/sjlee/hdfs-merge
Commit: d5ddc3450f2f49ea411de590ff3de15b5ec4e17c
Parents: 1faa44d
Author: cnauroth cnaur...@apache.org
Authored: Mon Feb 23 15:12:27 2015 -0800
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 23:19:33 2015 -0700

--
 .../main/java/org/apache/hadoop/ipc/Client.java |   3 +-
 .../TestDatanodeProtocolRetryPolicy.java| 231 +++
 2 files changed, 233 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d5ddc345/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index 96da01c..8a98eb0 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -25,6 +25,7 @@ import java.io.BufferedOutputStream;
 import java.io.ByteArrayOutputStream;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
+import java.io.EOFException;
 import java.io.FilterInputStream;
 import java.io.IOException;
 import java.io.InputStream;
@@ -279,7 +280,7 @@ public class Client {
   /** Check the rpc response header. */
   void checkResponse(RpcResponseHeaderProto header) throws IOException {
 if (header == null) {
-  throw new IOException("Response is null.");
+  throw new EOFException("Response is null.");
 }
 if (header.hasClientId()) {
   // check client IDs
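
The one-line Client.java change swaps IOException for EOFException when the response header comes back null. Presumably this matters because retry logic can key on the exception class; a hedged sketch of how a caller might treat the two differently (toy retry loop, not the Hadoop retry framework):

import java.io.EOFException;
import java.io.IOException;

// Toy retry loop: an EOFException ("connection died mid-response") is
// retried, while any other IOException propagates immediately.
public class RetryOnEofSketch {
  interface Call<T> { T run() throws IOException; }

  static <T> T withRetry(Call<T> call, int maxRetries) throws IOException {
    for (int attempt = 0; ; attempt++) {
      try {
        return call.run();
      } catch (EOFException e) {
        if (attempt >= maxRetries) {
          throw e; // give up after maxRetries failed attempts
        }
        // fall through and retry, e.g. against another NameNode
      }
    }
  }
}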

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d5ddc345/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
new file mode 100644
index 000..c7ed5b9
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
@@ -0,0 +1,231 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.datanode;
+
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.io.EOFException;
+import java.io.File;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+
+import com.google.common.base.Supplier;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.commons.logging.impl.Log4JLogger;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import 

[07/43] hadoop git commit: HDFS-7531. Improve the concurrent access on FsVolumeList (Lei Xu via Colin P. McCabe) (cherry picked from commit 3b173d95171d01ab55042b1162569d1cf14a8d43)

2015-08-14 Thread sjlee
HDFS-7531. Improve the concurrent access on FsVolumeList (Lei Xu via Colin P. 
McCabe)
(cherry picked from commit 3b173d95171d01ab55042b1162569d1cf14a8d43)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java

(cherry picked from commit dda1fc169db2e69964cca746be4ff8965eb8b56f)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ba28192f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ba28192f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ba28192f

Branch: refs/heads/sjlee/hdfs-merge
Commit: ba28192f9d5a8385283bd717bca494e6981d378f
Parents: 418bd16
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Wed Dec 17 16:41:59 2014 -0800
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 22:11:55 2015 -0700

--
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |  28 ++--
 .../datanode/fsdataset/impl/FsVolumeList.java   | 138 +--
 .../fsdataset/impl/TestFsDatasetImpl.java   |  70 +-
 3 files changed, 174 insertions(+), 62 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ba28192f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index e7fa6d7..0d9f096 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -127,7 +127,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 
   @Override // FsDatasetSpi
   public List<FsVolumeImpl> getVolumes() {
-return volumes.volumes;
+return volumes.getVolumes();
   }
 
   @Override
@@ -140,9 +140,10 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
   throws IOException {
 StorageReport[] reports;
 synchronized (statsLock) {
-  reports = new StorageReport[volumes.volumes.size()];
+  List<FsVolumeImpl> curVolumes = getVolumes();
+  reports = new StorageReport[curVolumes.size()];
   int i = 0;
-  for (FsVolumeImpl volume : volumes.volumes) {
+  for (FsVolumeImpl volume : curVolumes) {
 reports[i++] = new StorageReport(volume.toDatanodeStorage(),
  false,
  volume.getCapacity(),
@@ -1322,7 +1323,8 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> 
{
 Map<String, ArrayList<ReplicaInfo>> uc =
 new HashMap<String, ArrayList<ReplicaInfo>>();
 
-for (FsVolumeSpi v : volumes.volumes) {
+List<FsVolumeImpl> curVolumes = getVolumes();
+for (FsVolumeSpi v : curVolumes) {
   finalized.put(v.getStorageID(), new ArrayList<ReplicaInfo>());
   uc.put(v.getStorageID(), new ArrayList<ReplicaInfo>());
 }
@@ -1349,7 +1351,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> 
{
   }
 }
 
-for (FsVolumeSpi v : volumes.volumes) {
+for (FsVolumeImpl v : curVolumes) {
   ArrayList<ReplicaInfo> finalizedList = finalized.get(v.getStorageID());
   ArrayList<ReplicaInfo> ucList = uc.get(v.getStorageID());
   blockReportsMap.put(((FsVolumeImpl) v).toDatanodeStorage(),
@@ -,7 +2224,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> 
{
 
   private Collection<VolumeInfo> getVolumeInfo() {
 Collection<VolumeInfo> info = new ArrayList<VolumeInfo>();
-for (FsVolumeImpl volume : volumes.volumes) {
+for (FsVolumeImpl volume : getVolumes()) {
   long used = 0;
   long free = 0;
   try {
@@ -2256,8 +2258,9 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> 
{
   @Override //FsDatasetSpi
   public synchronized void deleteBlockPool(String bpid, boolean force)
   throws IOException {
+List<FsVolumeImpl> curVolumes = getVolumes();
 if (!force) {
-  for (FsVolumeImpl volume : volumes.volumes) {
+  for (FsVolumeImpl volume : curVolumes) {
 if (!volume.isBPDirEmpty(bpid)) {
   LOG.warn(bpid + 
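
The recurring edit in this patch replaces direct reads of the shared volumes.volumes field with a getVolumes() snapshot taken once per operation. A minimal sketch of the snapshot-read idea; the real FsVolumeList change has its own synchronization scheme, so this CopyOnWriteArrayList version is only an analogy:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Analogy only: iterate over a stable snapshot of the volume list rather
// than the live, concurrently-modified field. Volume stands in for FsVolumeImpl.
public class VolumeListSketch<Volume> {
  private final List<Volume> volumes = new CopyOnWriteArrayList<Volume>();

  public List<Volume> getVolumes() {
    // CopyOnWriteArrayList iteration is snapshot-based; the unmodifiable
    // wrapper additionally stops callers from mutating the shared list.
    return Collections.unmodifiableList(volumes);
  }

  public void addVolume(Volume v) { volumes.add(v); }
  public void removeVolume(Volume v) { volumes.remove(v); }
}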

[09/43] hadoop git commit: reverted CHANGES.txt for HDFS-7225.

2015-08-14 Thread sjlee
reverted CHANGES.txt for HDFS-7225.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/084674aa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/084674aa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/084674aa

Branch: refs/heads/sjlee/hdfs-merge
Commit: 084674aa28e841a68d97cec98289d1ad137ece6c
Parents: 33fb7b4
Author: Sangjin Lee sj...@apache.org
Authored: Wed Aug 12 22:16:27 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 22:16:27 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/084674aa/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index cc4d2ab..47ec910 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -35,9 +35,6 @@ Release 2.6.1 - UNRELEASED
 
 HDFS-8486. DN startup may cause severe data loss. (daryn via cmccabe)
 
-HDFS-7225. Remove stale block invalidation work when DN re-registers with
-different UUID. (Zhe Zhang and Andrew Wang)
-
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[16/43] hadoop git commit: HDFS-7788. Post-2.6 namenode may not start up with an image containing inodes created with an old release. Contributed by Rushabh Shah. (cherry picked from commit 7ae5255a16

2015-08-14 Thread sjlee
HDFS-7788. Post-2.6 namenode may not start up with an image containing inodes 
created with an old release. Contributed by Rushabh Shah.
(cherry picked from commit 7ae5255a1613ccfb43646f33eabacf1062c86e93)

(cherry picked from commit b9157f92fc3e008e4f3029f8feeaf6acb52eb76f)

Conflicts:
  
hadoop-hdfs-project/hadoop-hdfs/src/site/resources/image-with-zero-block-size.tar.gz
  
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1faa44d8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1faa44d8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1faa44d8

Branch: refs/heads/sjlee/hdfs-merge
Commit: 1faa44d8f4d7b944e99dd0470ea2638c7653a131
Parents: c1e65de
Author: Kihwal Lee kih...@apache.org
Authored: Fri Feb 20 09:09:56 2015 -0600
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 23:15:07 2015 -0700

--
 .../hadoop/hdfs/server/namenode/INodeFile.java  |   3 ++
 .../apache/hadoop/hdfs/util/LongBitFormat.java  |   4 ++
 .../resources/image-with-zero-block-size.tar.gz | Bin 0 - 1378 bytes
 .../hdfs/server/namenode/TestFSImage.java   |  48 +++
 4 files changed, 55 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1faa44d8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
index 5136f8b..1dd6da3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
@@ -103,6 +103,9 @@ public class INodeFile extends INodeWithAdditionalFields
 static long toLong(long preferredBlockSize, short replication,
 byte storagePolicyID) {
   long h = 0;
+  if (preferredBlockSize == 0) {
+preferredBlockSize = PREFERRED_BLOCK_SIZE.BITS.getMin();
+  }
   h = PREFERRED_BLOCK_SIZE.BITS.combine(preferredBlockSize, h);
   h = REPLICATION.BITS.combine(replication, h);
   h = STORAGE_POLICY_ID.BITS.combine(storagePolicyID, h);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1faa44d8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
index 863d9f7..9399d84 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
@@ -64,4 +64,8 @@ public class LongBitFormat implements Serializable {
 }
 return (record & ~MASK) | (value << OFFSET);
   }
+  
+  public long getMin() {
+return MIN;
+  }
 }
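
LongBitFormat packs several small fields into one long; the fix maps a stored preferred block size of 0 (which old pre-2.6 images could contain) to the field's minimum before packing. A hedged sketch of the format with the clamp folded into combine() for brevity (the actual patch clamps in INodeFile.toLong(), as shown above):

import java.io.Serializable;

// Simplified bit-field packer; offset/length/min are per-field parameters.
public class LongBitFormatSketch implements Serializable {
  private final int offset;  // bit offset of this field within the long
  private final long mask;   // field mask, already shifted into position
  private final long min;    // smallest legal value for this field

  public LongBitFormatSketch(int offset, int length, long min) {
    this.offset = offset;
    this.mask = ((-1L) >>> (64 - length)) << offset;
    this.min = min;
  }

  public long combine(long value, long record) {
    if (value < min) {
      value = min; // e.g. preferredBlockSize == 0 from an old image
    }
    return (record & ~mask) | (value << offset);
  }

  public long retrieve(long record) {
    return (record & mask) >>> offset;
  }

  public long getMin() { return min; }
}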

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1faa44d8/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/image-with-zero-block-size.tar.gz
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/image-with-zero-block-size.tar.gz
 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/image-with-zero-block-size.tar.gz
new file mode 100644
index 000..41f3105
Binary files /dev/null and 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/resources/image-with-zero-block-size.tar.gz
 differ

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1faa44d8/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
index f21834e..d19980c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
@@ -28,10 +28,13 @@ import org.junit.Assert;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
 import 

[06/43] hadoop git commit: HDFS-7446. HDFS inotify should have the ability to determine what txid it has read up to (cmccabe) (cherry picked from commit 75a326aaff8c92349701d9b3473c3070b8c2be44)

2015-08-14 Thread sjlee
HDFS-7446. HDFS inotify should have the ability to determine what txid it has 
read up to (cmccabe)
(cherry picked from commit 75a326aaff8c92349701d9b3473c3070b8c2be44)

(cherry picked from commit 06552a15d5172a2b0ad3d61aa7f9a849857385aa)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/418bd16e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/418bd16e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/418bd16e

Branch: refs/heads/sjlee/hdfs-merge
Commit: 418bd16eaea26e647318db74fd2f42c0d5758a3c
Parents: 014d07d
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Nov 25 17:44:34 2014 -0800
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 21:46:01 2015 -0700

--
 .../hadoop/hdfs/DFSInotifyEventInputStream.java |  65 ++--
 .../apache/hadoop/hdfs/inotify/EventBatch.java  |  41 +++
 .../hadoop/hdfs/inotify/EventBatchList.java |  63 
 .../apache/hadoop/hdfs/inotify/EventsList.java  |  63 
 .../hadoop/hdfs/protocol/ClientProtocol.java|   8 +-
 .../ClientNamenodeProtocolTranslatorPB.java |   4 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java | 341 ++-
 .../namenode/InotifyFSEditLogOpTranslator.java  |  74 ++--
 .../hdfs/server/namenode/NameNodeRpcServer.java |  23 +-
 .../hadoop-hdfs/src/main/proto/inotify.proto|  10 +-
 .../hdfs/TestDFSInotifyEventInputStream.java| 209 +++-
 11 files changed, 513 insertions(+), 388 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/418bd16e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
index 73c5f55..83b92b9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
@@ -19,11 +19,10 @@
 package org.apache.hadoop.hdfs;
 
 import com.google.common.collect.Iterators;
-import com.google.common.util.concurrent.UncheckedExecutionException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.hdfs.inotify.Event;
-import org.apache.hadoop.hdfs.inotify.EventsList;
+import org.apache.hadoop.hdfs.inotify.EventBatch;
+import org.apache.hadoop.hdfs.inotify.EventBatchList;
 import org.apache.hadoop.hdfs.inotify.MissingEventsException;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.util.Time;
@@ -33,13 +32,7 @@ import org.slf4j.LoggerFactory;
 import java.io.IOException;
 import java.util.Iterator;
 import java.util.Random;
-import java.util.concurrent.Callable;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
-import java.util.concurrent.TimeoutException;
 
 /**
  * Stream for reading inotify events. DFSInotifyEventInputStreams should not
@@ -52,7 +45,7 @@ public class DFSInotifyEventInputStream {
   .class);
 
   private final ClientProtocol namenode;
-  private Iterator<Event> it;
+  private Iterator<EventBatch> it;
   private long lastReadTxid;
   /**
* The most recent txid the NameNode told us it has sync'ed -- helps us
@@ -78,22 +71,22 @@ public class DFSInotifyEventInputStream {
   }
 
   /**
-   * Returns the next event in the stream or null if no new events are 
currently
-   * available.
+   * Returns the next batch of events in the stream or null if no new
+   * batches are currently available.
*
* @throws IOException because of network error or edit log
* corruption. Also possible if JournalNodes are unresponsive in the
* QJM setting (even one unresponsive JournalNode is enough in rare cases),
* so catching this exception and retrying at least a few times is
* recommended.
-   * @throws MissingEventsException if we cannot return the next event in the
-   * stream because the data for the event (and possibly some subsequent 
events)
-   * has been deleted (generally because this stream is a very large number of
-   * events behind the current state of the NameNode). It is safe to continue
-   * reading from the stream after this exception is thrown -- the next
-   * available event will be returned.
+   * @throws 
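
The javadoc is truncated above, but the visible API change replaces per-Event delivery with EventBatch, each batch carrying the transaction id it was read at, so a consumer can record how far into the edit log it has read and resume later. A small illustrative stand-in (strings instead of real inotify Event objects):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Toy EventBatch: events that share one txid travel together, and the txid
// doubles as a resume point for the reader.
public class EventBatchSketch {
  public static final class EventBatch {
    private final long txid;
    private final List<String> events;
    public EventBatch(long txid, List<String> events) {
      this.txid = txid;
      this.events = Collections.unmodifiableList(new ArrayList<String>(events));
    }
    public long getTxid() { return txid; }
    public List<String> getEvents() { return events; }
  }

  public static void main(String[] args) {
    EventBatch batch = new EventBatch(42L, Arrays.asList("CREATE /a", "CLOSE /a"));
    System.out.println("read up to txid " + batch.getTxid()
        + " (" + batch.getEvents().size() + " events)");
  }
}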

[05/43] hadoop git commit: HDFS-7225. Remove stale block invalidation work when DN re-registers with different UUID. (Zhe Zhang and Andrew Wang)

2015-08-14 Thread sjlee
HDFS-7225. Remove stale block invalidation work when DN re-registers with 
different UUID. (Zhe Zhang and Andrew Wang)

(cherry picked from commit 406c09ad1150c4971c2b7675fcb0263d40517fbf)
(cherry picked from commit 2e15754a92c6589308ccbbb646166353cc2f2456)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/014d07de
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/014d07de
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/014d07de

Branch: refs/heads/sjlee/hdfs-merge
Commit: 014d07de2e9b39be4b6793f0e09fcf8548570ad5
Parents: d79a584
Author: Andrew Wang w...@apache.org
Authored: Tue Nov 18 22:14:04 2014 -0800
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 21:32:30 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../server/blockmanagement/BlockManager.java|  21 ++-
 .../server/blockmanagement/DatanodeManager.java |   2 +
 .../TestComputeInvalidateWork.java  | 167 +++
 4 files changed, 156 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/014d07de/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 47ec910..cc4d2ab 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -35,6 +35,9 @@ Release 2.6.1 - UNRELEASED
 
 HDFS-8486. DN startup may cause severe data loss. (daryn via cmccabe)
 
+HDFS-7225. Remove stale block invalidation work when DN re-registers with
+different UUID. (Zhe Zhang and Andrew Wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/014d07de/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 17112bf..d26cc52 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1112,6 +1112,18 @@ public class BlockManager {
   }
 
   /**
+   * Remove all block invalidation tasks under this datanode UUID;
+   * used when a datanode registers with a new UUID and the old one
+   * is wiped.
+   */
+  void removeFromInvalidates(final DatanodeInfo datanode) {
+if (!namesystem.isPopulatingReplQueues()) {
+  return;
+}
+invalidateBlocks.remove(datanode);
+  }
+
+  /**
* Mark the block belonging to datanode as corrupt
* @param blk Block to be marked as corrupt
* @param dn Datanode which holds the corrupt replica
@@ -3395,7 +3407,14 @@ public class BlockManager {
 return 0;
   }
   try {
-toInvalidate = 
invalidateBlocks.invalidateWork(datanodeManager.getDatanode(dn));
+DatanodeDescriptor dnDescriptor = datanodeManager.getDatanode(dn);
+if (dnDescriptor == null) {
+LOG.warn("DataNode " + dn + " cannot be found with UUID " +
+  dn.getDatanodeUuid() + ", removing block invalidation work.");
+  invalidateBlocks.remove(dn);
+  return 0;
+}
+toInvalidate = invalidateBlocks.invalidateWork(dnDescriptor);
 
 if (toInvalidate == null) {
   return 0;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/014d07de/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 6a52349..80965b9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -593,6 +593,8 @@ public class DatanodeManager {
 synchronized (datanodeMap) {
   host2DatanodeMap.remove(datanodeMap.remove(key));
 }
+// Also remove all block invalidation tasks under this node
+blockManager.removeFromInvalidates(new DatanodeInfo(node));
 if (LOG.isDebugEnabled()) {
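
The two hunks above pair up: invalidation work is keyed by datanode, so when DatanodeManager drops a node (it may re-register later under a new UUID), the matching work queue must be dropped too or it would never drain. A minimal map-of-queues sketch of that bookkeeping:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy version of the invalidateBlocks structure: per-datanode pending work
// that must be discarded when the datanode itself is removed.
public class InvalidateBlocksSketch {
  private final Map<String, Set<Long>> nodeToBlocks = new HashMap<String, Set<Long>>();

  public void addWork(String datanodeUuid, long blockId) {
    Set<Long> blocks = nodeToBlocks.get(datanodeUuid);
    if (blocks == null) {
      blocks = new HashSet<Long>();
      nodeToBlocks.put(datanodeUuid, blocks);
    }
    blocks.add(blockId);
  }

  // Mirrors removeFromInvalidates(): called when the node leaves the map.
  public void remove(String datanodeUuid) {
    nodeToBlocks.remove(datanodeUuid);
  }

  public int pendingFor(String datanodeUuid) {
    Set<Long> blocks = nodeToBlocks.get(datanodeUuid);
    return blocks == null ? 0 : blocks.size();
  }
}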
   

[19/43] hadoop git commit: HDFS-7871. NameNodeEditLogRoller can keep printing 'Swallowing exception' message. Contributed by Jing Zhao.

2015-08-14 Thread sjlee
HDFS-7871. NameNodeEditLogRoller can keep printing 'Swallowing exception' 
message. Contributed by Jing Zhao.

(cherry picked from commit b442aeec95abfa1c6f835a116dfe6e186b0d841d)
(cherry picked from commit 6090f51725e2b44d794433ed72a1901fae2ba7e3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e1af1ac4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e1af1ac4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e1af1ac4

Branch: refs/heads/sjlee/hdfs-merge
Commit: e1af1ac4e91d36b21df18ce5627e1f69f27f0776
Parents: fd70e4d
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 2 20:22:04 2015 -0800
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 23:30:57 2015 -0700

--
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1af1ac4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 8e5a2db..5541637 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -5203,14 +5203,16 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
 + rollThreshold);
 rollEditLog();
   }
+} catch (Exception e) {
+  FSNamesystem.LOG.error("Swallowing exception in "
+  + NameNodeEditLogRoller.class.getSimpleName() + ":", e);
+}
+try {
   Thread.sleep(sleepIntervalMs);
 } catch (InterruptedException e) {
   FSNamesystem.LOG.info(NameNodeEditLogRoller.class.getSimpleName()
   + " was interrupted, exiting");
   break;
-} catch (Exception e) {
-  FSNamesystem.LOG.error("Swallowing exception in "
-  + NameNodeEditLogRoller.class.getSimpleName() + ":", e);
 }
   }
 }
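
The restructuring above splits one try block into two so that a failure in rollEditLog() can no longer skip the sleep: before the fix, an exception thrown on every iteration produced a busy loop of "Swallowing exception" messages. The loop shape, sketched standalone with illustrative names:

// Illustrative roller loop: work exceptions are swallowed so the thread
// survives, but the back-off sleep sits in its own try block and therefore
// always runs; interruption of the sleep remains the shutdown signal.
public class RollerLoopSketch implements Runnable {
  private volatile boolean shouldRun = true;
  private final long sleepIntervalMs;

  public RollerLoopSketch(long sleepIntervalMs) {
    this.sleepIntervalMs = sleepIntervalMs;
  }

  @Override
  public void run() {
    while (shouldRun) {
      try {
        doWork(); // stand-in for the rollEditLog() check
      } catch (Exception e) {
        System.err.println("Swallowing exception: " + e);
      }
      try {
        Thread.sleep(sleepIntervalMs); // reached even when doWork() failed
      } catch (InterruptedException e) {
        break; // shutdown requested
      }
    }
  }

  void doWork() {}
  public void stop() { shouldRun = false; }
}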



[10/43] hadoop git commit: HDFS-7182. JMX metrics aren't accessible when NN is busy. Contributed by Ming Ma.

2015-08-14 Thread sjlee
HDFS-7182. JMX metrics aren't accessible when NN is busy. Contributed by Ming 
Ma.

(cherry picked from commit 4b589e7cfa27bd042e228bbbcf1c3b75b2aeaa57)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystemMBean.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/96f0813c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/96f0813c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/96f0813c

Branch: refs/heads/sjlee/hdfs-merge
Commit: 96f0813c5d6140aabe7b2837f30971936276e689
Parents: 084674a
Author: Jing Zhao ji...@apache.org
Authored: Fri Jan 9 17:35:57 2015 -0800
Committer: Sangjin Lee sj...@apache.org
Committed: Wed Aug 12 22:19:28 2015 -0700

--
 .../hdfs/server/namenode/FSNamesystem.java  | 15 ++---
 .../server/namenode/TestFSNamesystemMBean.java  | 69 +---
 2 files changed, 23 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/96f0813c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 9e38195..7077b68 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -421,7 +421,7 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
 
   private String nameserviceId;
 
-  private RollingUpgradeInfo rollingUpgradeInfo = null;
+  private volatile RollingUpgradeInfo rollingUpgradeInfo = null;
   /**
* A flag that indicates whether the checkpointer should checkpoint a 
rollback
* fsimage. The edit log tailer sets this flag. The checkpoint will create a
@@ -8355,16 +8355,11 @@ public class FSNamesystem implements Namesystem, 
FSClusterStats,
 
   @Override  // NameNodeMXBean
   public RollingUpgradeInfo.Bean getRollingUpgradeStatus() {
-readLock();
-try {
-  RollingUpgradeInfo upgradeInfo = getRollingUpgradeInfo();
-  if (upgradeInfo != null) {
-return new RollingUpgradeInfo.Bean(upgradeInfo);
-  }
-  return null;
-} finally {
-  readUnlock();
+RollingUpgradeInfo upgradeInfo = getRollingUpgradeInfo();
+if (upgradeInfo != null) {
+  return new RollingUpgradeInfo.Bean(upgradeInfo);
 }
+return null;
   }
 
   /** Is rolling upgrade in progress? */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/96f0813c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystemMBean.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystemMBean.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystemMBean.java
index 39e1165..c044fb0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystemMBean.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystemMBean.java
@@ -17,11 +17,16 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
-import static org.junit.Assert.*;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertNotNull;
 
 import java.lang.management.ManagementFactory;
+import java.util.HashSet;
 import java.util.Map;
+import java.util.Set;
 
+import javax.management.MBeanAttributeInfo;
+import javax.management.MBeanInfo;
 import javax.management.MBeanServer;
 import javax.management.ObjectName;
 
@@ -51,66 +56,28 @@ public class TestFSNamesystemMBean {
 // come from hadoop metrics framework for the class FSNamesystem.
 ObjectName mxbeanNamefsn = new ObjectName(
 "Hadoop:service=NameNode,name=FSNamesystem");
-Integer blockCapacity = (Integer) (mbs.getAttribute(mxbeanNamefsn,
-"BlockCapacity"));
 
 // Metrics that belong to FSNamesystemState.
 // These are metrics that FSNamesystem registers directly with 
MBeanServer.
 ObjectName mxbeanNameFsns = new ObjectName(
 "Hadoop:service=NameNode,name=FSNamesystemState");
-String FSState = (String) (mbs.getAttribute(mxbeanNameFsns,
-"FSState"));
-Long blocksTotal = (Long) (mbs.getAttribute(mxbeanNameFsns,
-"BlocksTotal"));
-Long capacityTotal = (Long) 
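
The deleted assertions fetched a hard-coded handful of attributes; the new imports (MBeanInfo, MBeanAttributeInfo, Set) suggest the rewritten test enumerates every attribute of the bean instead. A standalone illustration of that technique against a stock platform MBean (java.lang:type=Memory is just a convenient stand-in, not the NameNode bean):

import java.lang.management.ManagementFactory;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Enumerate and read every readable attribute of an MBean, so a hang or
// error on *any* metric surfaces, not just on the few named explicitly.
public class ReadAllAttributes {
  public static void main(String[] args) throws Exception {
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    ObjectName name = new ObjectName("java.lang:type=Memory");
    MBeanInfo info = mbs.getMBeanInfo(name);
    for (MBeanAttributeInfo attr : info.getAttributes()) {
      if (attr.isReadable()) {
        Object value = mbs.getAttribute(name, attr.getName());
        System.out.println(attr.getName() + " = " + value);
      }
    }
  }
}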

[4/4] hadoop git commit: HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification pipe is full (zhaoyunjiong via cmccabe) Moved to 2.6.1

2015-08-14 Thread vinayakumarb
HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification pipe is 
full (zhaoyunjiong via cmccabe)
Moved to 2.6.1

(cherry picked from commit 05ed69058f22ebeccc58faf0be491c269e950526)

Conflicts:
hadoop-common-project/hadoop-common/CHANGES.txt

(cherry picked from commit 796b94df1ea0269617417bb7889ed4035ed0f5a2)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d8d33055
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d8d33055
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d8d33055

Branch: refs/heads/branch-2.7
Commit: d8d33055b08a2368fca2023f811b41aea74810b0
Parents: 90f3641
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:53:46 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:56:25 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8d33055/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 8fc969f..5642148 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -441,9 +441,6 @@ Release 2.7.0 - 2015-04-20
 HADOOP-11257. Update hadoop jar documentation to warn against using it
 for launching yarn jars (iwasakims via cmccabe)
 
-HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification
-pipe is full (zhaoyunjiong via cmccabe)
-
 HADOOP-11337. KeyAuthorizationKeyProvider access checks need to be done
 atomically. (Dian Fu via wang)
 
@@ -835,6 +832,9 @@ Release 2.6.1 - UNRELEASED
 
 HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
 
+HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification
+pipe is full (zhaoyunjiong via cmccabe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[3/4] hadoop git commit: HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification pipe is full (zhaoyunjiong via cmccabe) Moved to 2.6.1

2015-08-14 Thread vinayakumarb
HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification pipe is 
full (zhaoyunjiong via cmccabe)
Moved to 2.6.1

(cherry picked from commit 05ed69058f22ebeccc58faf0be491c269e950526)

Conflicts:
hadoop-common-project/hadoop-common/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/796b94df
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/796b94df
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/796b94df

Branch: refs/heads/branch-2
Commit: 796b94df1ea0269617417bb7889ed4035ed0f5a2
Parents: 15760a1
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:53:46 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:55:25 2015 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/796b94df/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 05c5a56..609fc35 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1013,9 +1013,6 @@ Release 2.7.0 - 2015-04-20
 HADOOP-11257. Update hadoop jar documentation to warn against using it
 for launching yarn jars (iwasakims via cmccabe)
 
-HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification
-pipe is full (zhaoyunjiong via cmccabe)
-
 HADOOP-11337. KeyAuthorizationKeyProvider access checks need to be done
 atomically. (Dian Fu via wang)
 
@@ -1407,6 +1404,9 @@ Release 2.6.1 - UNRELEASED
 
 HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. (Stephen Chu via wheat9)
 
+HADOOP-11333. Fix deadlock in DomainSocketWatcher when the notification
+pipe is full (zhaoyunjiong via cmccabe)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[11/50] [abbrv] hadoop git commit: YARN-3039. Implemented the app-level timeline aggregator discovery service. Contributed by Junping Du.

2015-08-14 Thread vinodkv
http://git-wip-us.apache.org/repos/asf/hadoop/blob/dbfc0537/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/ReportNewAggregatorsInfoRequestPBImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/ReportNewAggregatorsInfoRequestPBImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/ReportNewAggregatorsInfoRequestPBImpl.java
new file mode 100644
index 000..eb7beef
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/ReportNewAggregatorsInfoRequestPBImpl.java
@@ -0,0 +1,142 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import 
org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos.AppAggregatorsMapProto;
+import 
org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos.ReportNewAggregatorsInfoRequestProto;
+import 
org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos.ReportNewAggregatorsInfoRequestProtoOrBuilder;
+import 
org.apache.hadoop.yarn.server.api.protocolrecords.ReportNewAggregatorsInfoRequest;
+import org.apache.hadoop.yarn.server.api.records.AppAggregatorsMap;
+import 
org.apache.hadoop.yarn.server.api.records.impl.pb.AppAggregatorsMapPBImpl;
+
+public class ReportNewAggregatorsInfoRequestPBImpl extends
+ReportNewAggregatorsInfoRequest {
+
+  ReportNewAggregatorsInfoRequestProto proto = 
+  ReportNewAggregatorsInfoRequestProto.getDefaultInstance();
+  
+  ReportNewAggregatorsInfoRequestProto.Builder builder = null;
+  boolean viaProto = false;
+
+  private List<AppAggregatorsMap> aggregatorsList = null;
+
+  public ReportNewAggregatorsInfoRequestPBImpl() {
+builder = ReportNewAggregatorsInfoRequestProto.newBuilder();
+  }
+
+  public ReportNewAggregatorsInfoRequestPBImpl(
+  ReportNewAggregatorsInfoRequestProto proto) {
+this.proto = proto;
+viaProto = true;
+  }
+
+  public ReportNewAggregatorsInfoRequestProto getProto() {
+mergeLocalToProto();
+proto = viaProto ? proto : builder.build();
+viaProto = true;
+return proto;
+  }
+  
+  @Override
+  public int hashCode() {
+return getProto().hashCode();
+  }
+
+  @Override
+  public boolean equals(Object other) {
+if (other == null)
+  return false;
+if (other.getClass().isAssignableFrom(this.getClass())) {
+  return this.getProto().equals(this.getClass().cast(other).getProto());
+}
+return false;
+  }
+
+  private void mergeLocalToProto() {
+if (viaProto)
+  maybeInitBuilder();
+mergeLocalToBuilder();
+proto = builder.build();
+viaProto = true;
+  }
+
+  private void mergeLocalToBuilder() {
+if (aggregatorsList != null) {
+  addLocalAggregatorsToProto();
+}
+  }
+  
+  private void maybeInitBuilder() {
+if (viaProto || builder == null) {
+  builder = ReportNewAggregatorsInfoRequestProto.newBuilder(proto);
+}
+viaProto = false;
+  }
+
+  private void addLocalAggregatorsToProto() {
+maybeInitBuilder();
+builder.clearAppAggregators();
+List<AppAggregatorsMapProto> protoList =
+new ArrayList<AppAggregatorsMapProto>();
+for (AppAggregatorsMap m : this.aggregatorsList) {
+  protoList.add(convertToProtoFormat(m));
+}
+builder.addAllAppAggregators(protoList);
+  }
+
+  private void initLocalAggregatorsList() {
+ReportNewAggregatorsInfoRequestProtoOrBuilder p = viaProto ? proto : 
builder;
+List<AppAggregatorsMapProto> aggregatorsList =
+p.getAppAggregatorsList();
+this.aggregatorsList = new ArrayList<AppAggregatorsMap>();
+for (AppAggregatorsMapProto m : aggregatorsList) {
+  this.aggregatorsList.add(convertFromProtoFormat(m));
+}
+  }
+
+  @Override
+  public 
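
The class is truncated here, but the visible pieces (proto, builder, viaProto, maybeInitBuilder, mergeLocalToProto) follow the standard YARN record-over-protobuf pattern: an immutable built snapshot plus a lazily-created builder that absorbs local edits. A protobuf-free sketch of the same state machine, using a plain list in place of the generated proto:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Stand-in for the PBImpl pattern: "proto" is the last immutable snapshot,
// "builder" stages local mutations, and viaProto says which one is current.
public class LazyMergeRecordSketch {
  private List<String> proto = Collections.emptyList();
  private List<String> builder = null;
  private boolean viaProto = true;

  private void maybeInitBuilder() {
    if (viaProto || builder == null) {
      builder = new ArrayList<String>(proto); // seed builder from the snapshot
    }
    viaProto = false;
  }

  public void add(String item) {
    maybeInitBuilder();
    builder.add(item);
  }

  public List<String> getProto() {
    if (!viaProto) { // merge local edits lazily, only when asked
      proto = Collections.unmodifiableList(new ArrayList<String>(builder));
      viaProto = true;
    }
    return proto;
  }
}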

[13/50] [abbrv] hadoop git commit: YARN-3391. Clearly define flow ID/ flow run / flow version in API and storage. Contributed by Zhijie Shen

2015-08-14 Thread vinodkv
YARN-3391. Clearly define flow ID/ flow run / flow version in API and storage. 
Contributed by Zhijie Shen

(cherry picked from commit 68c6232f8423e55b4d152ef3d1d66aeb2d6a555e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/43d27a4d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/43d27a4d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/43d27a4d

Branch: refs/heads/YARN-2928
Commit: 43d27a4dc3e7b4065ed9c39ae4788148000a72c9
Parents: d7a7e6f
Author: Junping Du junping...@apache.org
Authored: Thu Apr 9 18:04:27 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:23 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../applications/distributedshell/Client.java   | 36 +--
 .../distributedshell/TestDistributedShell.java  | 13 +++
 .../yarn/util/timeline/TimelineUtils.java   | 34 +++---
 .../GetTimelineCollectorContextResponse.java| 17 +
 ...tTimelineCollectorContextResponsePBImpl.java | 38 +---
 .../yarn_server_common_service_protos.proto |  5 +--
 .../java/org/apache/hadoop/yarn/TestRPC.java|  7 ++--
 .../collectormanager/NMCollectorService.java|  2 +-
 .../containermanager/ContainerManagerImpl.java  | 18 ++
 .../application/Application.java|  6 ++--
 .../application/ApplicationImpl.java| 27 +-
 .../application/TestApplication.java|  2 +-
 .../yarn/server/nodemanager/webapp/MockApp.java | 23 +---
 .../nodemanager/webapp/TestNMWebServices.java   |  2 +-
 .../server/resourcemanager/ClientRMService.java | 21 +++
 .../resourcemanager/amlauncher/AMLauncher.java  | 30 
 .../TestTimelineServiceClientIntegration.java   |  2 +-
 .../collector/AppLevelTimelineCollector.java| 10 +++---
 .../collector/TimelineCollector.java|  4 +--
 .../collector/TimelineCollectorContext.java | 32 +++--
 .../collector/TimelineCollectorManager.java | 15 
 .../storage/FileSystemTimelineWriterImpl.java   | 13 +++
 .../timelineservice/storage/TimelineWriter.java |  7 ++--
 ...TestPerNodeTimelineCollectorsAuxService.java |  2 +-
 .../collector/TestTimelineCollectorManager.java |  3 +-
 .../TestFileSystemTimelineWriterImpl.java   |  8 +++--
 27 files changed, 256 insertions(+), 124 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/43d27a4d/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index ce7425d..0f14e13 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -50,6 +50,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3334. NM uses timeline client to publish container metrics to new
 timeline service. (Junping Du via zjshen)
 
+YARN-3391. Clearly define flow ID/ flow run / flow version in API and
+storage. (Zhijie Shen via junping_du)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

http://git-wip-us.apache.org/repos/asf/hadoop/blob/43d27a4d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
index db69490..ff2f594 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
@@ -185,8 +185,9 @@ public class Client {
   // Timeline domain writer access control
   private String modifyACLs = null;
 
-  private String flowId = null;
-  private String flowRunId = null;
+  private String flowName = null;
+  private String flowVersion = null;
+  private long flowRunId = 0L;
 
   // Command line options
   private Options opts;
@@ -289,9 +290,11 @@ public class Client {
         + "modify the timeline entities in the given domain");
     opts.addOption("create", false, "Flag to indicate whether to create the "
         + "domain specified with -domain.");
-
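
The substance of the rename: a flow is now identified by a (name, version,
numeric run id) triple rather than two strings. A minimal sketch of a client
accepting the three fields, mirroring the opts.addOption(...) pattern in the
hunk above; the option names here are illustrative, not necessarily the ones
the distributed shell registers.

import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Options;

public class FlowContextCliSketch {
  public static void main(String[] args) throws Exception {
    Options opts = new Options();
    opts.addOption("flow_name", true, "Name of the flow this app belongs to");
    opts.addOption("flow_version", true, "Version string of the flow");
    opts.addOption("flow_run_id", true, "Numeric (long) id of this flow run");

    CommandLine cliParser = new GnuParser().parse(opts, args);
    String flowName = cliParser.getOptionValue("flow_name");
    String flowVersion = cliParser.getOptionValue("flow_version");
    // Matches the field change above: the run id is a long, defaulting to 0.
    long flowRunId =
        Long.parseLong(cliParser.getOptionValue("flow_run_id", "0"));
    System.out.printf("flow=%s version=%s runId=%d%n",
        flowName, flowVersion, flowRunId);
  }
}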

[19/50] [abbrv] hadoop git commit: YARN-3040. Make putEntities operation be aware of the app's context. Contributed by Zhijie Shen

2015-08-14 Thread vinodkv
YARN-3040. Make putEntities operation be aware of the app's context. 
Contributed by Zhijie Shen

(cherry picked from commit db2f0238915d6e1a5b85c463426b5e072bd4698d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/60203f25
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/60203f25
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/60203f25

Branch: refs/heads/YARN-2928
Commit: 60203f256117ea6e7d2f878cfb7f3a927ccd2723
Parents: ff57048
Author: Junping Du junping...@apache.org
Authored: Thu Mar 26 09:59:32 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:23 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop/yarn/conf/YarnConfiguration.java |   1 +
 .../applications/distributedshell/Client.java   |  27 +++-
 .../distributedshell/TestDistributedShell.java  | 125 +---
 .../yarn/util/timeline/TimelineUtils.java   |  16 +++
 .../api/CollectorNodemanagerProtocol.java   |  16 +++
 ...ollectorNodemanagerProtocolPBClientImpl.java |  20 +++
 ...llectorNodemanagerProtocolPBServiceImpl.java |  21 +++
 .../GetTimelineCollectorContextRequest.java |  37 +
 .../GetTimelineCollectorContextResponse.java|  46 ++
 ...etTimelineCollectorContextRequestPBImpl.java | 127 +
 ...tTimelineCollectorContextResponsePBImpl.java | 141 +++
 .../proto/collectornodemanager_protocol.proto   |   1 +
 .../yarn_server_common_service_protos.proto |   9 ++
 .../java/org/apache/hadoop/yarn/TestRPC.java|  39 +
 .../collectormanager/NMCollectorService.java|  18 ++-
 .../containermanager/ContainerManagerImpl.java  |  14 +-
 .../application/Application.java|   4 +
 .../application/ApplicationImpl.java|  17 ++-
 .../application/TestApplication.java|   3 +-
 .../yarn/server/nodemanager/webapp/MockApp.java |  10 ++
 .../nodemanager/webapp/TestNMWebServices.java   |   2 +-
 .../resourcemanager/amlauncher/AMLauncher.java  |  23 ++-
 .../timelineservice/RMTimelineCollector.java|   7 +
 .../TestTimelineServiceClientIntegration.java   |  19 ++-
 .../collector/AppLevelTimelineCollector.java|  33 -
 .../PerNodeTimelineCollectorsAuxService.java|   2 +-
 .../collector/TimelineCollector.java|  19 ++-
 .../collector/TimelineCollectorContext.java |  81 +++
 .../collector/TimelineCollectorManager.java |  32 -
 .../collector/TimelineCollectorWebService.java  |   2 +-
 .../storage/FileSystemTimelineWriterImpl.java   |  69 +
 .../timelineservice/storage/TimelineWriter.java |   9 +-
 ...TestPerNodeTimelineCollectorsAuxService.java |  43 --
 .../collector/TestTimelineCollectorManager.java |  41 +-
 .../TestFileSystemTimelineWriterImpl.java   |  22 ++-
 36 files changed, 956 insertions(+), 143 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/60203f25/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 09e9ecb..4816b0d 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -41,6 +41,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3034. Implement RM starting its timeline collector. (Naganarasimha G R
 via junping_du)
 
+YARN-3040. Make putEntities operation be aware of the app's context.
+(Zhijie Shen via junping_du)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

http://git-wip-us.apache.org/repos/asf/hadoop/blob/60203f25/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 613ffd9..81c8f4c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -124,6 +124,7 @@ public class YarnConfiguration extends Configuration {
   public static final String RM_PREFIX = "yarn.resourcemanager.";
 
   public static final String RM_CLUSTER_ID = RM_PREFIX + "cluster-id";
+  public static final String DEFAULT_RM_CLUSTER_ID = "yarn_cluster";
 
   public static final String RM_HOSTNAME = RM_PREFIX + "hostname";
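
The new constant gives the timeline service a usable cluster id even when
yarn.resourcemanager.cluster-id is unset. A minimal sketch of the lookup,
using the standard Configuration.get(key, default) fallback idiom:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ClusterIdSketch {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // Falls back to the new default ("yarn_cluster") when the key is unset.
    String clusterId = conf.get(YarnConfiguration.RM_CLUSTER_ID,
        YarnConfiguration.DEFAULT_RM_CLUSTER_ID);
    System.out.println("cluster id: " + clusterId);
  }
}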
 


[17/50] [abbrv] hadoop git commit: YARN-3431. Sub resources of timeline entity needs to be passed to a separate endpoint. Contributed By Zhijie Shen.

2015-08-14 Thread vinodkv
YARN-3431. Sub resources of timeline entity needs to be passed to a separate 
endpoint. Contributed By Zhijie Shen.

(cherry picked from commit fa5cc75245a6dba549620a8b26c7b4a8aed9838e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f0bc3390
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f0bc3390
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f0bc3390

Branch: refs/heads/YARN-2928
Commit: f0bc3390f9eacda95104938b17b757c730182426
Parents: 63c6606
Author: Junping Du junping...@apache.org
Authored: Mon Apr 27 11:28:32 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:23 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../ApplicationAttemptEntity.java   |  13 +-
 .../timelineservice/ApplicationEntity.java  |  22 +-
 .../records/timelineservice/ClusterEntity.java  |  12 +-
 .../timelineservice/ContainerEntity.java|  13 +-
 .../api/records/timelineservice/FlowEntity.java |  80 +++--
 .../HierarchicalTimelineEntity.java | 124 +++
 .../records/timelineservice/QueueEntity.java|  36 +++
 .../records/timelineservice/TimelineEntity.java | 322 +++
 .../records/timelineservice/TimelineQueue.java  |  35 --
 .../records/timelineservice/TimelineUser.java   |  35 --
 .../api/records/timelineservice/UserEntity.java |  36 +++
 .../TestTimelineServiceRecords.java |  91 --
 .../TestTimelineServiceClientIntegration.java   |  44 ++-
 .../collector/TimelineCollectorWebService.java  |  65 +++-
 15 files changed, 654 insertions(+), 277 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f0bc3390/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0a8fc6e..2fe104e 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -55,6 +55,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 
 YARN-3390. Reuse TimelineCollectorManager for RM (Zhijie Shen via sjlee)
 
+YARN-3431. Sub resources of timeline entity needs to be passed to a separate
+endpoint. (Zhijie Shen via junping_du)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f0bc3390/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/ApplicationAttemptEntity.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/ApplicationAttemptEntity.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/ApplicationAttemptEntity.java
index 9dc0c1d..734c741 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/ApplicationAttemptEntity.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/ApplicationAttemptEntity.java
@@ -20,16 +20,17 @@ package org.apache.hadoop.yarn.api.records.timelineservice;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlRootElement;
-
-@XmlRootElement(name = "appattempt")
-@XmlAccessorType(XmlAccessType.NONE)
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public class ApplicationAttemptEntity extends HierarchicalTimelineEntity {
   public ApplicationAttemptEntity() {
 super(TimelineEntityType.YARN_APPLICATION_ATTEMPT.toString());
   }
+
+  public ApplicationAttemptEntity(TimelineEntity entity) {
+    super(entity);
+    if (!entity.getType().equals(
+        TimelineEntityType.YARN_APPLICATION_ATTEMPT.toString())) {
+      throw new IllegalArgumentException("Incompatible entity type: "
+          + getId());
+    }
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f0bc3390/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/ApplicationEntity.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/ApplicationEntity.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/ApplicationEntity.java
index 45ec520..183d8d8 100644
--- 

[24/50] [abbrv] hadoop git commit: YARN-3551. Consolidate data model change according to the backend implementation (Zhijie Shen via sjlee)

2015-08-14 Thread vinodkv
YARN-3551. Consolidate data model change according to the backend 
implementation (Zhijie Shen via sjlee)

(cherry picked from commit 557a3950bddc837469244835f5577899080115d8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f0cea6da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f0cea6da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f0cea6da

Branch: refs/heads/YARN-2928
Commit: f0cea6dab7223724d580f52e1a669babf58c801b
Parents: 1f156af
Author: Sangjin Lee sj...@apache.org
Authored: Mon May 4 16:10:20 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:24 2015 -0700

--
 .../mapred/TimelineServicePerformanceV2.java|   2 +-
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../records/timelineservice/TimelineEntity.java |  16 +--
 .../records/timelineservice/TimelineMetric.java | 131 +--
 .../TestTimelineServiceRecords.java |  81 +---
 .../monitor/ContainersMonitorImpl.java  |   4 +-
 .../TestTimelineServiceClientIntegration.java   |   6 +
 7 files changed, 146 insertions(+), 97 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f0cea6da/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TimelineServicePerformanceV2.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TimelineServicePerformanceV2.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TimelineServicePerformanceV2.java
index de46617..1c2e28d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TimelineServicePerformanceV2.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TimelineServicePerformanceV2.java
@@ -261,7 +261,7 @@ public class TimelineServicePerformanceV2 extends Configured implements Tool {
       // add a metric
       TimelineMetric metric = new TimelineMetric();
       metric.setId("foo_metric");
-      metric.setSingleData(123456789L);
+      metric.addValue(System.currentTimeMillis(), 123456789L);
       entity.addMetric(metric);
       // add a config
       entity.addConfig("foo", "bar");
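
The setSingleData-to-addValue change reflects the consolidated data model: a
metric is now a time series keyed by timestamp rather than a single datum. A
short sketch, assuming only the TimelineMetric methods this hunk exercises;
the metric id and values are illustrative:

import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;

public class MetricSeriesSketch {
  public static void main(String[] args) {
    TimelineMetric metric = new TimelineMetric();
    metric.setId("memory_mb");
    long now = System.currentTimeMillis();
    metric.addValue(now - 2000, 1024L); // one (timestamp, value) pair...
    metric.addValue(now - 1000, 2048L); // ...per observation
    metric.addValue(now, 1536L);
    System.out.println("recorded 3 data points for " + metric.getId());
  }
}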

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f0cea6da/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 2fe104e..bcb4cc9 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -58,6 +58,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3431. Sub resources of timeline entity needs to be passed to a separate
 endpoint. (Zhijie Shen via junping_du)
 
+YARN-3551. Consolidate data model change according to the backend
+implementation (Zhijie Shen via sjlee)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f0cea6da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
index 6cab753..3be7f52 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
@@ -80,7 +80,7 @@ public class TimelineEntity {
   private TimelineEntity real;
   private Identifier identifier;
   private HashMap<String, Object> info = new HashMap<>();
-  private HashMap<String, Object> configs = new HashMap<>();
+  private HashMap<String, String> configs = new HashMap<>();
   private Set<TimelineMetric> metrics = new HashSet<>();
   private Set<TimelineEvent> events = new HashSet<>();
   private HashMap<String, Set<String>> isRelatedToEntities = new HashMap<>();
@@ -213,7 +213,7 @@
   // required by JAXB
   @InterfaceAudience.Private
   @XmlElement(name = "configs")
-  public HashMap<String, Object> getConfigsJAXB() {
+  public HashMap<String, String> 
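
The practical effect of the field change: configuration attached to an entity
is uniformly String-to-String now, not String-to-Object. A one-method sketch
using addConfig (the same call the perf-test hunk above makes); the type, id,
and keys are illustrative:

import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;

public class EntityConfigSketch {
  public static void main(String[] args) {
    TimelineEntity entity = new TimelineEntity();
    entity.setType("MAPREDUCE_JOB");
    entity.setId("job_1439577616466_0001");
    // Values must be strings after YARN-3551; numbers are stringified.
    entity.addConfig("mapreduce.map.memory.mb", "1024");
    entity.addConfig("mapreduce.job.queuename", "default");
    System.out.println(entity.getConfigs());
  }
}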

[38/50] [abbrv] hadoop git commit: YARN-3706. Generalize native HBase writer for additional tables (Joep Rottinghuis via sjlee)

2015-08-14 Thread vinodkv
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b0369827/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/Separator.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/Separator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/Separator.java
new file mode 100644
index 000..ee57890
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/Separator.java
@@ -0,0 +1,303 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hadoop.yarn.server.timelineservice.storage.common;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Used to separate row qualifiers, column qualifiers and compound fields.
+ */
+public enum Separator {
+
+  /**
+   * separator in key or column qualifier fields
+   */
+  QUALIFIERS("!", "%0$"),
+
+  /**
+   * separator in values, and/or compound key/column qualifier fields.
+   */
+  VALUES("?", "%1$"),
+
+  /**
+   * separator in values, often used to avoid having these in qualifiers and
+   * names. Note that if we use HTML form encoding through URLEncoder, we end
+   * up getting a + for a space, which may already occur in strings, so we
+   * don't want that.
+   */
+  SPACE(" ", "%2$");
+
+  /**
+   * The string value of this separator.
+   */
+  private final String value;
+
+  /**
+   * The URLEncoded version of this separator
+   */
+  private final String encodedValue;
+
+  /**
+   * The byte representation of value.
+   */
+  private final byte[] bytes;
+
+  /**
+   * The value quoted so that it can be used as a safe regex
+   */
+  private final String quotedValue;
+
+  private static final byte[] EMPTY_BYTES = new byte[0];
+
+  /**
+   * @param value of the separator to use. Cannot be null or empty string.
+   * @param encodedValue choose something that isn't likely to occur in the
+   *          data itself. Cannot be null or empty string.
+   */
+  private Separator(String value, String encodedValue) {
+    this.value = value;
+    this.encodedValue = encodedValue;
+
+    // validation
+    if (value == null || value.length() == 0 || encodedValue == null
+        || encodedValue.length() == 0) {
+      throw new IllegalArgumentException(
+          "Cannot create separator from null or empty string.");
+    }
+
+    this.bytes = Bytes.toBytes(value);
+    this.quotedValue = Pattern.quote(value);
+  }
+
+  /**
+   * Used to make token safe to be used with this separator without collisions.
+   *
+   * @param token
+   * @return the token with any occurrences of this separator URLEncoded.
+   */
+  public String encode(String token) {
+    if (token == null || token.length() == 0) {
+      // Nothing to replace
+      return token;
+    }
+    return token.replace(value, encodedValue);
+  }
+
+  /**
+   * @param token
+   * @return the token with any occurrences of the encoded separator replaced
+   *         by the separator itself.
+   */
+  public String decode(String token) {
+    if (token == null || token.length() == 0) {
+      // Nothing to replace
+      return token;
+    }
+    return token.replace(encodedValue, value);
+  }
+
+  /**
+   * Encode the given separators in the token with their encoding equivalent.
+   * This means that when encoding is already present in the token itself, this
+   * is not a reversible process. See also {@link #decode(String, Separator...)}
+   *
+   * @param token containing possible separators that need to be encoded.
+   * @param separators to be encoded in the token with their URLEncoding
+   *          equivalent.
+   * @return non-null byte representation of the token with 
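
A round-trip sketch of the encode/decode pair shown above, using QUALIFIERS
("!" encoded as "%0$"). Since this is plain string replacement, a token that
already contains the encoded form does not survive the trip, which is exactly
the caveat the Javadoc raises:

import org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator;

public class SeparatorSketch {
  public static void main(String[] args) {
    String raw = "flow!name";                        // contains the separator
    String safe = Separator.QUALIFIERS.encode(raw);  // "flow%0$name"
    String back = Separator.QUALIFIERS.decode(safe); // "flow!name"
    System.out.println(raw.equals(back));            // true
  }
}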

[26/50] [abbrv] hadoop git commit: YARN-3134. Implemented Phoenix timeline writer to access HBase backend. Contributed by Li Lu.

2015-08-14 Thread vinodkv
YARN-3134. Implemented Phoenix timeline writer to access HBase backend. 
Contributed by Li Lu.

(cherry picked from commit b3b791be466be79e4e964ad068f7a6ec701e22e1)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f029a90d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f029a90d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f029a90d

Branch: refs/heads/YARN-2928
Commit: f029a90da7e54828b9e7d85a838f8a05052d2247
Parents: 7d325ca
Author: Zhijie Shen zjs...@apache.org
Authored: Fri May 8 19:08:02 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:24 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../dev-support/findbugs-exclude.xml|  11 +
 .../hadoop-yarn-server-timelineservice/pom.xml  |  17 +
 .../collector/TimelineCollector.java|  13 +-
 .../collector/TimelineCollectorManager.java |  19 +
 .../storage/PhoenixTimelineWriterImpl.java  | 509 +++
 .../storage/TestPhoenixTimelineWriterImpl.java  | 125 +
 .../storage/TestTimelineWriterImpl.java |  74 +++
 8 files changed, 760 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f029a90d/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 2498952..c1080ad 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -64,6 +64,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3562. unit tests failures and issues found from findbug from earlier
 ATS checkins (Naganarasimha G R via sjlee)
 
+YARN-3134. Implemented Phoenix timeline writer to access HBase backend. (Li
+Lu via zjshen)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f029a90d/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml 
b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index 114851f..d25d1d9 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -485,6 +485,17 @@
     </Or>
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <!-- Ignore SQL_PREPARED_STATEMENT_GENERATED_FROM_NONCONSTANT_STRING warnings for Timeline Phoenix storage. -->
+  <!-- Since we're using dynamic columns, we have to generate SQL statements dynamically -->
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.timelineservice.storage.PhoenixTimelineWriterImpl" />
+    <Or>
+      <Method name="storeEntityVariableLengthFields" />
+      <Method name="storeEvents" />
+      <Method name="storeMetrics" />
+      <Method name="write" />
+    </Or>
+  </Match>
   
   <!-- Following fields are used in ErrorsAndWarningsBlock, which is not a part of analysis of findbugs -->
   <Match>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f029a90d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
index f974aee..f62230f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
@@ -120,6 +120,23 @@
       <artifactId>mockito-all</artifactId>
       <scope>test</scope>
     </dependency>
+
+    <dependency>
+      <groupId>org.apache.phoenix</groupId>
+      <artifactId>phoenix-core</artifactId>
+      <version>4.3.0</version>
+      <exclusions>
+        <!-- Exclude jline from here -->
+        <exclusion>
+          <artifactId>jline</artifactId>
+          <groupId>jline</groupId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>com.google.guava</groupId>
+      <artifactId>guava</artifactId>
+    </dependency>
   </dependencies>
 
   <build>
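
Phoenix is reached over plain JDBC, which is why the findbugs exclusions
above tolerate prepared statements built from non-constant SQL: the writer
assembles column lists dynamically. A hedged sketch of that access pattern,
not the actual PhoenixTimelineWriterImpl code; the table and column names are
illustrative, and "jdbc:phoenix:<zk-quorum>" is Phoenix's JDBC URL form.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PhoenixUpsertSketch {
  public static void main(String[] args) throws Exception {
    // Connects to a Phoenix-enabled HBase via its ZooKeeper quorum.
    try (Connection conn =
        DriverManager.getConnection("jdbc:phoenix:localhost")) {
      // Dynamically assembled SQL, as the exclusion above anticipates.
      String sql = "UPSERT INTO TIMELINE_ENTITY (ENTITY_ID, CREATED_TIME) "
          + "VALUES (?, ?)";
      try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, "entity_1");
        ps.setLong(2, System.currentTimeMillis());
        ps.executeUpdate();
      }
      conn.commit(); // Phoenix buffers mutations until commit
    }
  }
}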

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f029a90d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollector.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollector.java
 

[10/50] [abbrv] hadoop git commit: YARN-3039. Implemented the app-level timeline aggregator discovery service. Contributed by Junping Du.

2015-08-14 Thread vinodkv
http://git-wip-us.apache.org/repos/asf/hadoop/blob/dbfc0537/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
index cec1d71..dd64629 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
@@ -32,6 +32,7 @@ import java.util.concurrent.Future;
 
 import com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.junit.Test;
 
 public class TestTimelineAggregatorsCollection {
@@ -45,11 +46,11 @@ public class TestTimelineAggregatorsCollection {
     final int NUM_APPS = 5;
     List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
     for (int i = 0; i < NUM_APPS; i++) {
-      final String appId = String.valueOf(i);
+      final ApplicationId appId = ApplicationId.newInstance(0L, i);
       Callable<Boolean> task = new Callable<Boolean>() {
         public Boolean call() {
           AppLevelTimelineAggregator aggregator =
-              new AppLevelTimelineAggregator(appId);
+              new AppLevelTimelineAggregator(appId.toString());
           return (aggregatorCollection.putIfAbsent(appId, aggregator) == aggregator);
         }
       };
@@ -79,14 +80,14 @@ public class TestTimelineAggregatorsCollection {
     final int NUM_APPS = 5;
     List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
     for (int i = 0; i < NUM_APPS; i++) {
-      final String appId = String.valueOf(i);
+      final ApplicationId appId = ApplicationId.newInstance(0L, i);
       Callable<Boolean> task = new Callable<Boolean>() {
         public Boolean call() {
           AppLevelTimelineAggregator aggregator =
-              new AppLevelTimelineAggregator(appId);
+              new AppLevelTimelineAggregator(appId.toString());
           boolean successPut =
               (aggregatorCollection.putIfAbsent(appId, aggregator) == aggregator);
-          return successPut && aggregatorCollection.remove(appId);
+          return successPut && aggregatorCollection.remove(appId.toString());
         }
       };
       tasks.add(task);
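
Both tests hinge on the putIfAbsent contract: under a race, exactly one
caller's aggregator gets registered and every caller observes the same winner
(here the aggregator collection returns the winning instance rather than
null). A stripped-down java.util.concurrent sketch of the same idea:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentSketch {
  public static void main(String[] args) {
    ConcurrentMap<String, Object> registry =
        new ConcurrentHashMap<String, Object>();
    Object mine = new Object();
    Object prev = registry.putIfAbsent("application_0_0001", mine);
    // Standard-library contract: null means this caller won the race.
    Object winner = (prev == null) ? mine : prev;
    System.out.println("my instance registered: " + (winner == mine));
  }
}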



[35/50] [abbrv] hadoop git commit: YARN-3792. Test case failures in TestDistributedShell and some issue fixes related to ATSV2 (Naganarasimha G R via sjlee)

2015-08-14 Thread vinodkv
YARN-3792. Test case failures in TestDistributedShell and some issue fixes 
related to ATSV2 (Naganarasimha G R via sjlee)

(cherry picked from commit 84f37f1c7eefec6d139cbf091c50d6c06f734323)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f1360768
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f1360768
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f1360768

Branch: refs/heads/YARN-2928
Commit: f13607689b6a29ce27358b0fd8c3ef27a09b9a63
Parents: b036982
Author: Sangjin Lee sj...@apache.org
Authored: Mon Jun 22 20:47:56 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:26 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt | 33 +++
 .../applications/distributedshell/Client.java   |  2 +-
 .../distributedshell/TestDistributedShell.java  | 91 +---
 .../TestDistributedShellWithNodeLabels.java |  9 +-
 .../client/api/impl/TimelineClientImpl.java |  8 ++
 .../application/ApplicationImpl.java|  4 +-
 .../monitor/ContainersMonitorImpl.java  | 15 ++--
 .../RMTimelineCollectorManager.java |  2 +-
 .../collector/NodeTimelineCollectorManager.java | 14 ---
 .../PerNodeTimelineCollectorsAuxService.java|  3 +-
 .../collector/TimelineCollectorManager.java |  2 +-
 11 files changed, 107 insertions(+), 76 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f1360768/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a7bae37..5592cbe 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -35,9 +35,6 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
-YARN-. Rename TimelineAggregator etc. to TimelineCollector. (Sangjin Lee
-via junping_du)
 
-YARN-3377. Fixed test failure in TestTimelineServiceClientIntegration.
-(Sangjin Lee via zjshen)
-
 YARN-3034. Implement RM starting its timeline collector. (Naganarasimha G R
 via junping_du)
 
@@ -61,27 +58,15 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3551. Consolidate data model change according to the backend
 implementation (Zhijie Shen via sjlee)
 
-YARN-3562. unit tests failures and issues found from findbug from earlier
-ATS checkins (Naganarasimha G R via sjlee)
-
 YARN-3134. Implemented Phoenix timeline writer to access HBase backend. (Li
 Lu via zjshen)
 
 YARN-3529. Added mini HBase cluster and Phoenix support to timeline service
 v2 unit tests. (Li Lu via zjshen)
 
-YARN-3634. TestMRTimelineEventHandling and TestApplication are broken. (
-Sangjin Lee via junping_du)
-
 YARN-3411. [Storage implementation] explore the native HBase write schema
 for storage (Vrushali C via sjlee)
 
-YARN-3726. Fix TestHBaseTimelineWriterImpl unit test failure by fixing its
-test data (Vrushali C via sjlee)
-
-YARN-3721. build is broken on YARN-2928 branch due to possible dependency
-cycle (Li Lu via sjlee)
-
 YARN-3044. Made RM write app, attempt and optional container lifecycle
 events to timeline service v2. (Naganarasimha G R via zjshen)
 
@@ -100,6 +85,24 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 
   BUG FIXES
 
+YARN-3377. Fixed test failure in TestTimelineServiceClientIntegration.
+(Sangjin Lee via zjshen)
+
+YARN-3562. unit tests failures and issues found from findbug from earlier
+ATS checkins (Naganarasimha G R via sjlee)
+
+YARN-3634. TestMRTimelineEventHandling and TestApplication are broken. (
+Sangjin Lee via junping_du)
+
+YARN-3726. Fix TestHBaseTimelineWriterImpl unit test failure by fixing its
+test data (Vrushali C via sjlee)
+
+YARN-3721. build is broken on YARN-2928 branch due to possible dependency
+cycle (Li Lu via sjlee)
+
+YARN-3792. Test case failures in TestDistributedShell and some issue fixes
+related to ATSV2 (Naganarasimha G R via sjlee)
+
 Trunk - Unreleased
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f1360768/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
 

[Hadoop Wiki] Update of Release-2.6.1-Working-Notes by AkiraAjisaka

2015-08-14 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The Release-2.6.1-Working-Notes page has been changed by AkiraAjisaka:
https://wiki.apache.org/hadoop/Release-2.6.1-Working-Notes?action=diff&rev1=14&rev2=15

Comment:
Reflected Sangjin's updates.

  Ordered list of commits to cherrypick
  
   ||SHA1||JIRA || Status || New patch on JIRA || Applies cleanly ||
+  ||5f3d967aaefa0b20ef1586b4048b8fa5345d2618 ||HDFS-7278. Add a command that 
allows sysadmins to manually trigger full block reports from || || || ADDED!!! 
minor issues ||
-  ||946463efefec9031cacb21d5a5367acd150ef904 ||HDFS-7213. 
processIncrementalBlockReport performance degradation.   Contributed by 
Eric Payn || ||
+  ||946463efefec9031cacb21d5a5367acd150ef904 ||HDFS-7213. 
processIncrementalBlockReport performance degradation.   Contributed by 
Eric Payn || || || yes ||
-  ||842a54a5f66e76eb79321b66cc3b8820fe66c5cd ||HDFS-7235. 
DataNode#transferBlock should report blocks that don't exist using reportBadBlo 
|| ||
+  ||842a54a5f66e76eb79321b66cc3b8820fe66c5cd ||HDFS-7235. 
DataNode#transferBlock should report blocks that don't exist using reportBadBlo 
|| || || yes ||
-  ||8bfef590295372a48bd447b1462048008810ee17 ||HDFS-7263. Snapshot read can 
reveal future bytes for appended files. Contributed by Tao Lu || ||
+  ||8bfef590295372a48bd447b1462048008810ee17 ||HDFS-7263. Snapshot read can 
reveal future bytes for appended files. Contributed by Tao Lu || || || yes ||
-  ||ec2621e907742aad0264c5f533783f0f18565880 ||HDFS-7035. Make adding a new 
data directory to the DataNode an atomic operation and improv || ||
+  ||ec2621e907742aad0264c5f533783f0f18565880 ||HDFS-7035. Make adding a new 
data directory to the DataNode an atomic operation and improv || || || minor 
issues ||
   ||9e63cb4492896ffb78c84e27f263a61ca12148c8 ||HADOOP-10786. Fix 
UGI#reloginFromKeytab on Java 8. ||Applied locally || ||Yes ||
   ||beb184ac580b0d89351a3f3a7201da34a26db1c1 ||YARN-2856. Fixed RMAppImpl to 
handle ATTEMPT_KILLED event at ACCEPTED state on app recover ||Applied locally 
|| ||Yes ||
   ||ad140d1fc831735fb9335e27b38d2fc040847af1 ||YARN-2816. NM fail to start 
with NPE during container recovery. Contributed by Zhihai Xu( ||Applied 
locally || ||Yes ||
   ||242fd0e39ad1c5d51719cd0f6c197166066e3288 ||YARN-2414. RM web UI: app page 
will crash if app is failed before any attempt has been cre ||Applied locally 
|| ||Yes ||
-  ||2e15754a92c6589308ccbbb646166353cc2f2456 ||HDFS-7225. Remove stale block 
invalidation work when DN re-registers with different UUID. ||
+  ||2e15754a92c6589308ccbbb646166353cc2f2456 ||HDFS-7225. Remove stale block 
invalidation work when DN re-registers with different UUID. || || yes ||
   ||db31ef7e7f55436bbf88c6d93e2273c4463ca9f0 ||YARN-2865. Fixed RM to always 
create a new RMContext when transtions from StandBy to Activ ||Applied locally 
|| ||Yes ||
-  ||8d8eb8dcec94e92d94eedef883cdece8ba333087 ||HDFS-7425. NameNode block 
deletion logging uses incorrect appender. Contributed by Chris N || ||
+  ||8d8eb8dcec94e92d94eedef883cdece8ba333087 ||HDFS-7425. NameNode block 
deletion logging uses incorrect appender. Contributed by Chris N || || || 
remove; already committed ||
-  ||946df98dce18975e37a6a14744ca7a5429f019ce ||HDFS-4882. Prevent the 
Namenode's LeaseManager from looping forever in checkLeases (Ravi P || ||
+  ||946df98dce18975e37a6a14744ca7a5429f019ce ||HDFS-4882. Prevent the 
Namenode's LeaseManager from looping forever in checkLeases (Ravi P || || || 
remove; already committed ||
   ||ae35b0e14d3438237f4b5d3b5d5268d45e549846 ||YARN-2906. 
CapacitySchedulerPage shows HTML tags for a queue's Active Users. Contributed b 
||Applied locally || ||Yes ||
   ||f6d1bf5ed1cf647d82e676df15587de42b1faa42 ||HADOOP-11333. Fix deadlock in 
DomainSocketWatcher when the notification pipe is full (zhao ||Applied locally 
|| ||Yes ||
   ||38ea1419f60d2b8176dba4931748f1f0e52ca84e ||YARN-2905. AggregatedLogsBlock 
page can infinitely loop if the aggregated log file is corr ||Applied locally 
|| ||Yes ||
   ||d21ef79707a0f32939d9a5af4fed2d9f5fe6f2ec ||YARN-2890. MiniYARNCluster 
should start the timeline server based on the configuration. Co ||Applied 
locally || ||Yes ||
-  ||06552a15d5172a2b0ad3d61aa7f9a849857385aa ||HDFS-7446. HDFS inotify should 
have the ability to determine what txid it has read up to ( || ||
+  ||06552a15d5172a2b0ad3d61aa7f9a849857385aa ||HDFS-7446. HDFS inotify should 
have the ability to determine what txid it has read up to ( || || || minor 
issues ||
   ||d6f3d4893d750f19dd8c539fe28eecfab2a54576 ||YARN-2894. Fixed a bug 
regarding application view acl when RM fails over. Contributed by R ||Applied 
locally || || No, minor import issues ||
   ||25be97808b99148412c0efd4d87fc750db4d6607 ||YARN-2874. Dead lock in 
DelegationTokenRenewer which blocks RM to execute any further apps ||Applied 
locally || ||Yes ||
   ||dabdd2d746d1e1194c124c5c7fe73fcc025e78d2 

[09/36] hadoop git commit: HDFS-7263. Snapshot read can reveal future bytes for appended files. Contributed by Tao Luo. Moved CHANGES.txt entry to 2.6.1

2015-08-14 Thread zhz
HDFS-7263. Snapshot read can reveal future bytes for appended files. 
Contributed by Tao Luo.
Moved CHANGES.txt entry to 2.6.1


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa264114
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa264114
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa264114

Branch: refs/heads/HDFS-7285-merge
Commit: fa2641143c0d74c4fef122d79f27791e15d3b43f
Parents: f2b4bc9
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 11:45:43 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 11:45:43 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa264114/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e4e2896..1507cbe 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1819,9 +1819,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-7301. TestMissingBlocksAlert should use MXBeans instead of old web UI.
 (Zhe Zhang via wheat9)
 
-HDFS-7263. Snapshot read can reveal future bytes for appended files.
-(Tao Luo via shv)
-
 HDFS-7315. DFSTestUtil.readFileBuffer opens extra FSDataInputStream.
 (Plamen Jeliazkov via wheat9)
 
@@ -2339,6 +2336,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7235. DataNode#transferBlock should report blocks that don't exist
 using reportBadBlock (yzhang via cmccabe)
 
+HDFS-7263. Snapshot read can reveal future bytes for appended files.
+(Tao Luo via shv)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[01/36] hadoop git commit: HADOOP-12244. recover broken rebase during precommit (aw)

2015-08-14 Thread zhz
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285-merge 8c7ca4f45 - acbe42a85 (forced update)


HADOOP-12244. recover broken rebase during precommit (aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b73181f1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b73181f1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b73181f1

Branch: refs/heads/HDFS-7285-merge
Commit: b73181f18702f9dc2dfc9d3cdb415b510261e74c
Parents: 53bef9c
Author: Allen Wittenauer a...@apache.org
Authored: Thu Aug 13 12:29:19 2015 -0700
Committer: Allen Wittenauer a...@apache.org
Committed: Thu Aug 13 12:29:19 2015 -0700

--
 dev-support/test-patch.sh   | 6 ++
 hadoop-common-project/hadoop-common/CHANGES.txt | 2 ++
 2 files changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b73181f1/dev-support/test-patch.sh
--
diff --git a/dev-support/test-patch.sh b/dev-support/test-patch.sh
index efcd614..a3cdc85 100755
--- a/dev-support/test-patch.sh
+++ b/dev-support/test-patch.sh
@@ -947,6 +947,12 @@ function git_checkout
 # we need to explicitly fetch in case the
 # git ref hasn't been brought in tree yet
     if [[ ${OFFLINE} == false ]]; then
+
+      if [[ -f .git/rebase-apply ]]; then
+        hadoop_error "ERROR: previous rebase failed. Aborting it."
+        ${GIT} rebase --abort
+      fi
+
       ${GIT} pull --rebase
       if [[ $? != 0 ]]; then
         hadoop_error "ERROR: git pull is failing"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b73181f1/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index c80be05..5d8d20d 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -504,6 +504,8 @@ Trunk (Unreleased)
 HADOOP-12009. Clarify FileSystem.listStatus() sorting order  fix
 FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman)
 
+HADOOP-12244. recover broken rebase during precommit (aw)
+
   OPTIMIZATIONS
 
 HADOOP-7761. Improve the performance of raw comparisons. (todd)



[03/36] hadoop git commit: YARN-4005. Completed container whose app is finished is possibly not removed from NMStateStore. Contributed by Jun Gong

2015-08-14 Thread zhz
YARN-4005. Completed container whose app is finished is possibly not removed 
from NMStateStore. Contributed by Jun Gong


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/38aed1a9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/38aed1a9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/38aed1a9

Branch: refs/heads/HDFS-7285-merge
Commit: 38aed1a94ed7b6da62e2445b5610bc02b1cddeeb
Parents: ae57d60
Author: Jian He jia...@apache.org
Authored: Thu Aug 13 14:46:08 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Thu Aug 13 14:46:08 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../nodemanager/NodeStatusUpdaterImpl.java  |  8 ++---
 .../nodemanager/TestNodeStatusUpdater.java  | 34 
 3 files changed, 41 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/38aed1a9/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 9745d9d..3d19734 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -763,6 +763,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3992. TestApplicationPriority.testApplicationPriorityAllocation fails 
 intermittently. (Contributed by Sunil G)
 
+YARN-4005. Completed container whose app is finished is possibly not
+removed from NMStateStore. (Jun Gong via jianhe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/38aed1a9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
index 30a2bd5..7c5c28b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
@@ -474,12 +474,12 @@ public class NodeStatusUpdaterImpl extends AbstractService implements
       } else {
         if (!isContainerRecentlyStopped(containerId)) {
           pendingCompletedContainers.put(containerId, containerStatus);
-          // Adding to finished containers cache. Cache will keep it around at
-          // least for #durationToTrackStoppedContainers duration. In the
-          // subsequent call to stop container it will get removed from cache.
-          addCompletedContainer(containerId);
         }
       }
+      // Adding to finished containers cache. Cache will keep it around at
+      // least for #durationToTrackStoppedContainers duration. In the
+      // subsequent call to stop container it will get removed from cache.
+      addCompletedContainer(containerId);
     } else {
       containerStatuses.add(containerStatus);
     }
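
The fix moves addCompletedContainer out of the inner if, so a completed
container enters the tracking cache even when its status is already pending;
otherwise a later stop-container call finds nothing to remove and the entry
leaks in the NM state store. A hedged sketch of the tracking-cache idea the
comment describes; the class and member names are illustrative, not the NM's:

import java.util.Iterator;
import java.util.LinkedHashMap;

public class RecentlyStoppedCache {
  private final long trackDurationMs;
  // containerId -> earliest time at which the entry may be evicted
  private final LinkedHashMap<String, Long> stopped =
      new LinkedHashMap<String, Long>();

  public RecentlyStoppedCache(long trackDurationMs) {
    this.trackDurationMs = trackDurationMs;
  }

  public synchronized void addCompletedContainer(String containerId) {
    stopped.put(containerId, System.currentTimeMillis() + trackDurationMs);
  }

  public synchronized boolean isContainerRecentlyStopped(String containerId) {
    return stopped.containsKey(containerId);
  }

  public synchronized void removeVeryOldEntries() {
    long now = System.currentTimeMillis();
    Iterator<Long> deadlines = stopped.values().iterator();
    while (deadlines.hasNext()) {
      if (deadlines.next() < now) {
        deadlines.remove();
      } else {
        break; // insertion order matches deadline order, so stop early
      }
    }
  }
}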

http://git-wip-us.apache.org/repos/asf/hadoop/blob/38aed1a9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
index bc48adf..a9ef72f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
@@ -994,6 +994,40 @@ public class TestNodeStatusUpdater {
 Assert.assertTrue(containerIdSet.contains(runningContainerId));
   }
 
+  @Test(timeout = 1)
+  public void testCompletedContainersIsRecentlyStopped() throws Exception {
+    NodeManager nm = new NodeManager();
+    nm.init(conf);
+

[31/36] hadoop git commit: HDFS-7285. Erasure Coding Support inside HDFS.

2015-08-14 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecf36348/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
new file mode 100644
index 000..03683b0
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
@@ -0,0 +1,561 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode.rawcoder.util;
+
+import java.nio.ByteBuffer;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * Implementation of Galois field arithmetic with 2^p elements. The input must
+ * be unsigned integers. It's ported from HDFS-RAID, slightly adapted.
+ */
+public class GaloisField {
+
+  // Field size 256 is good for byte based system
+  private static final int DEFAULT_FIELD_SIZE = 256;
+  // primitive polynomial 1 + X^2 + X^3 + X^4 + X^8 (substitute 2)
+  private static final int DEFAULT_PRIMITIVE_POLYNOMIAL = 285;
+  static private final Map<Integer, GaloisField> instances =
+      new HashMap<Integer, GaloisField>();
+  private final int[] logTable;
+  private final int[] powTable;
+  private final int[][] mulTable;
+  private final int[][] divTable;
+  private final int fieldSize;
+  private final int primitivePeriod;
+  private final int primitivePolynomial;
+
+  private GaloisField(int fieldSize, int primitivePolynomial) {
+    assert fieldSize > 0;
+    assert primitivePolynomial > 0;
+
+    this.fieldSize = fieldSize;
+    this.primitivePeriod = fieldSize - 1;
+    this.primitivePolynomial = primitivePolynomial;
+    logTable = new int[fieldSize];
+    powTable = new int[fieldSize];
+    mulTable = new int[fieldSize][fieldSize];
+    divTable = new int[fieldSize][fieldSize];
+    int value = 1;
+    for (int pow = 0; pow < fieldSize - 1; pow++) {
+      powTable[pow] = value;
+      logTable[value] = pow;
+      value = value * 2;
+      if (value >= fieldSize) {
+        value = value ^ primitivePolynomial;
+      }
+    }
+    // building multiplication table
+    for (int i = 0; i < fieldSize; i++) {
+      for (int j = 0; j < fieldSize; j++) {
+        if (i == 0 || j == 0) {
+          mulTable[i][j] = 0;
+          continue;
+        }
+        int z = logTable[i] + logTable[j];
+        z = z >= primitivePeriod ? z - primitivePeriod : z;
+        z = powTable[z];
+        mulTable[i][j] = z;
+      }
+    }
+    // building division table
+    for (int i = 0; i < fieldSize; i++) {
+      for (int j = 1; j < fieldSize; j++) {
+        if (i == 0) {
+          divTable[i][j] = 0;
+          continue;
+        }
+        int z = logTable[i] - logTable[j];
+        z = z < 0 ? z + primitivePeriod : z;
+        z = powTable[z];
+        divTable[i][j] = z;
+      }
+    }
+  }
+
+  /**
+   * Get the object performs Galois field arithmetics
+   *
+   * @param fieldSize   size of the field
+   * @param primitivePolynomial a primitive polynomial corresponds to the size
+   */
+  public static GaloisField getInstance(int fieldSize,
+      int primitivePolynomial) {
+    int key = ((fieldSize << 16) & 0xFFFF0000)
+        + (primitivePolynomial & 0x0000FFFF);
+    GaloisField gf;
+    synchronized (instances) {
+      gf = instances.get(key);
+      if (gf == null) {
+        gf = new GaloisField(fieldSize, primitivePolynomial);
+        instances.put(key, gf);
+      }
+    }
+    return gf;
+  }
+
+  /**
+   * Get the object performs Galois field arithmetic with default setting
+   */
+  public static GaloisField getInstance() {
+    return getInstance(DEFAULT_FIELD_SIZE, DEFAULT_PRIMITIVE_POLYNOMIAL);
+  }
+
+  /**
+   * Return number of elements in the field
+   *
+   * @return number of elements in the field
+   */
+  public int getFieldSize() {
+    return fieldSize;
+  }
+
+  /**
+   * Return the primitive polynomial in GF(2)
+   *
+   * @return 
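
To make the table construction above concrete, here is a self-contained
GF(2^8) sketch using the same primitive polynomial 285 (1 + x^2 + x^3 + x^4 +
x^8): multiplication adds discrete logs modulo the primitive period 255, and
the doubling loop reduces by the polynomial whenever a value leaves the field.

public class Gf256Sketch {
  private static final int[] LOG = new int[256];
  private static final int[] POW = new int[256];

  static {
    int value = 1;
    for (int pow = 0; pow < 255; pow++) {
      POW[pow] = value;
      LOG[value] = pow;
      value = value * 2;       // multiply by the generator (2)
      if (value >= 256) {
        value = value ^ 285;   // reduce by the primitive polynomial
      }
    }
  }

  static int multiply(int a, int b) {
    if (a == 0 || b == 0) {
      return 0;
    }
    int z = LOG[a] + LOG[b];             // add logs...
    return POW[z >= 255 ? z - 255 : z];  // ...modulo the primitive period
  }

  public static void main(String[] args) {
    // 2 * 128 = 256 leaves GF(256); 256 ^ 285 = 29, so the product is 29.
    System.out.println(multiply(2, 128)); // prints 29
  }
}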

[05/36] hadoop git commit: YARN-3987. Send AM container completed msg to NM once AM finishes. Contributed by sandflee

2015-08-14 Thread zhz
YARN-3987. Send AM container completed msg to NM once AM finishes. Contributed 
by sandflee


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0a030546
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0a030546
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0a030546

Branch: refs/heads/HDFS-7285-merge
Commit: 0a030546e24c55662a603bb63c9029ad0ccf43fc
Parents: 7a445fc
Author: Jian He jia...@apache.org
Authored: Thu Aug 13 16:20:36 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Thu Aug 13 16:22:53 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt   |  3 +++
 .../rmapp/attempt/RMAppAttemptImpl.java   | 14 ++
 2 files changed, 17 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0a030546/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a4c16b1..c451320 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -769,6 +769,9 @@ Release 2.8.0 - UNRELEASED
 YARN-4047. ClientRMService getApplications has high scheduler lock 
contention.
 (Jason Lowe via jianhe)
 
+YARN-3987. Send AM container completed msg to NM once AM finishes.
+(sandflee via jianhe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0a030546/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
index 0914022..80f5eb0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
@@ -1658,6 +1658,16 @@ public class RMAppAttemptImpl implements RMAppAttempt, Recoverable {
     }
   }
 
+  // Ack NM to remove the finished AM container, not waiting for the new
+  // appattempt to pull the AM container complete msg; a new appattempt may
+  // fail to launch and leave too many completed containers in the NM
+  private void sendFinishedAMContainerToNM(NodeId nodeId,
+      ContainerId containerId) {
+    List<ContainerId> containerIdList = new ArrayList<ContainerId>();
+    containerIdList.add(containerId);
+    eventHandler.handle(new RMNodeFinishedContainersPulledByAMEvent(
+        nodeId, containerIdList));
+  }
 
   // Ack NM to remove finished containers from context.
   private void sendFinishedContainersToNM() {
@@ -1686,9 +1696,13 @@ public class RMAppAttemptImpl implements RMAppAttempt, Recoverable {
           new ArrayList<ContainerStatus>());
       appAttempt.finishedContainersSentToAM.get(nodeId).add(
           containerFinishedEvent.getContainerStatus());
+
       if (!appAttempt.getSubmissionContext()
           .getKeepContainersAcrossApplicationAttempts()) {
         appAttempt.sendFinishedContainersToNM();
+      } else {
+        appAttempt.sendFinishedAMContainerToNM(nodeId,
+            containerFinishedEvent.getContainerStatus().getContainerId());
       }
     }
 



[17/36] hadoop git commit: HDFS-7285. Erasure Coding Support inside HDFS.

2015-08-14 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecf36348/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java
new file mode 100644
index 000..52626e1
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java
@@ -0,0 +1,433 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSStripedOutputStream;
+import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
+import org.apache.hadoop.hdfs.protocol.DatanodeID;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped;
+import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
+import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.apache.hadoop.hdfs.server.datanode.ReplicaBeingWritten;
+import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
+import org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo;
+import 
org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo.BlockStatus;
+import org.apache.hadoop.hdfs.server.protocol.StorageBlockReport;
+import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks;
+import org.apache.hadoop.io.IOUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_DEFAULT;
+import static 
org.apache.hadoop.hdfs.protocol.HdfsConstants.BLOCK_STRIPED_CELL_SIZE;
+import static org.apache.hadoop.hdfs.protocol.HdfsConstants.NUM_DATA_BLOCKS;
+import static org.junit.Assert.assertEquals;
+
+public class TestAddStripedBlocks {
+  private final short GROUP_SIZE = HdfsConstants.NUM_DATA_BLOCKS +
+  HdfsConstants.NUM_PARITY_BLOCKS;
+
+  private MiniDFSCluster cluster;
+  private DistributedFileSystem dfs;
+
+  @Before
+  public void setup() throws IOException {
+cluster = new MiniDFSCluster.Builder(new HdfsConfiguration())
+.numDataNodes(GROUP_SIZE).build();
+cluster.waitActive();
+dfs = cluster.getFileSystem();
+dfs.getClient().createErasureCodingZone("/", null, 0);
+  }
+
+  @After
+  public void tearDown() {
+if (cluster != null) {
+  cluster.shutdown();
+}
+  }
+
+  /**
+   * Make sure the IDs of striped blocks do not conflict
+   */
+  @Test
+  public void testAllocateBlockId() throws Exception {
+Path testPath = new Path("/testfile");
+// creating the file allocates a new block
+DFSTestUtil.writeFile(dfs, testPath, "hello, world!");
+LocatedBlocks lb = dfs.getClient().getLocatedBlocks(testPath.toString(), 
0);
+final long firstId = lb.get(0).getBlock().getBlockId();
+// delete the file
+dfs.delete(testPath, true);
+
+// allocate 
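
As background for testAllocateBlockId above: this sketch assumes each striped block group reserves one id per internal block (data plus parity), allocated sequentially, so consecutively created files get disjoint id ranges. The generator below is illustrative only, not the HDFS id generator.

// Illustrative only: assumes each block group reserves one id per internal
// block, handed out sequentially; not the HDFS implementation.
public class BlockGroupIdSketch {
  public static void main(String[] args) {
    final int dataBlocks = 6;
    final int parityBlocks = 3;
    final int groupWidth = dataBlocks + parityBlocks;

    long nextId = 1000;                        // arbitrary starting point
    long group1 = nextId; nextId += groupWidth;
    long group2 = nextId; nextId += groupWidth;

    // Internal block i of a group takes id (groupId + i), so the last id of
    // one group must stay below the first id of the next.
    long lastOfGroup1 = group1 + groupWidth - 1;
    if (lastOfGroup1 >= group2) {
      throw new AssertionError("striped block id ranges overlap");
    }
    System.out.println("group1 ids [" + group1 + "," + lastOfGroup1
        + "], group2 starts at " + group2);
  }
}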

[35/36] hadoop git commit: HDFS-8854. Erasure coding: add ECPolicy to replace schema+cellSize in hadoop-hdfs. Contributed by Walter Su.

2015-08-14 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/acbe42a8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index a08bd2f..2016908 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -91,7 +91,7 @@ import 
org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo;
 import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks;
 import org.apache.hadoop.hdfs.util.LightWeightLinkedSet;
 import org.apache.hadoop.metrics2.util.MBeans;
-import org.apache.hadoop.io.erasurecode.ECSchema;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
 
 import static 
org.apache.hadoop.hdfs.protocol.HdfsConstants.BLOCK_STRIPED_CELL_SIZE;
 import static 
org.apache.hadoop.hdfs.util.StripedBlockUtil.getInternalBlockLength;
@@ -966,14 +966,13 @@ public class BlockManager implements BlockStatsMXBean {
   ErasureCodingZone ecZone)
   throws IOException {
 assert namesystem.hasReadLock();
-final ECSchema schema = ecZone != null ? ecZone.getSchema() : null;
-final int cellSize = ecZone != null ? ecZone.getCellSize() : 0;
+final ErasureCodingPolicy ecPolicy = ecZone != null ? ecZone
+.getErasureCodingPolicy() : null;
 if (blocks == null) {
   return null;
 } else if (blocks.length == 0) {
   return new LocatedBlocks(0, isFileUnderConstruction,
-  Collections.<LocatedBlock> emptyList(), null, false, feInfo, schema,
-  cellSize);
+  Collections.<LocatedBlock> emptyList(), null, false, feInfo, 
ecPolicy);
 } else {
   if (LOG.isDebugEnabled()) {
 LOG.debug("blocks = " + java.util.Arrays.asList(blocks));
@@ -998,7 +997,7 @@ public class BlockManager implements BlockStatsMXBean {
   }
   return new LocatedBlocks(fileSizeExcludeBlocksUnderConstruction,
   isFileUnderConstruction, locatedblocks, lastlb, isComplete, feInfo,
-  schema, cellSize);
+  ecPolicy);
 }
   }
 
@@ -1618,7 +1617,7 @@ public class BlockManager implements BlockStatsMXBean {
   .warn("Failed to get the EC zone for the file {} ", src);
 }
 if (ecZone == null) {
-  blockLog.warn("No EC schema found for the file {}. "
+  blockLog.warn("No erasure coding policy found for the file {}. "
   + "So cannot proceed for recovery", src);
   // TODO: we may have to revisit later for what we can do better 
to
   // handle this case.
@@ -1628,7 +1627,7 @@ public class BlockManager implements BlockStatsMXBean {
 new ExtendedBlock(namesystem.getBlockPoolId(), block),
 rw.srcNodes, rw.targets,
 ((ErasureCodingWork) rw).liveBlockIndicies,
-ecZone.getSchema(), ecZone.getCellSize());
+ecZone.getErasureCodingPolicy());
   } else {
 rw.srcNodes[0].addBlockToBeReplicated(block, targets);
   }
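
The mechanical pattern of HDFS-8854 is visible in the hunks above: every (schema, cellSize) pair collapses into a single ErasureCodingPolicy argument. A minimal sketch of such a value object, with only the accessors these call sites imply (the real HDFS class carries more):

// Sketch of a policy value object bundling schema parameters and cell size;
// names are inferred from the call sites above, not the full HDFS class.
public final class ErasureCodingPolicySketch {
  private final String name;
  private final int numDataUnits;
  private final int numParityUnits;
  private final int cellSize;

  public ErasureCodingPolicySketch(String name, int numDataUnits,
      int numParityUnits, int cellSize) {
    this.name = name;
    this.numDataUnits = numDataUnits;
    this.numParityUnits = numParityUnits;
    this.cellSize = cellSize;
  }

  public String getName() { return name; }
  public int getNumDataUnits() { return numDataUnits; }
  public int getNumParityUnits() { return numParityUnits; }
  public int getCellSize() { return cellSize; }

  public static void main(String[] args) {
    ErasureCodingPolicySketch rs63 =
        new ErasureCodingPolicySketch("RS-6-3-64k", 6, 3, 64 * 1024);
    System.out.println(rs63.getName() + " cellSize=" + rs63.getCellSize());
  }
}

Bundling the two values keeps schema and cell size from drifting apart as they travel through the RPC and recovery paths.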

http://git-wip-us.apache.org/repos/asf/hadoop/blob/acbe42a8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index 3cf9db6..21f9f39 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -50,7 +50,7 @@ import org.apache.hadoop.hdfs.server.protocol.StorageReport;
 import org.apache.hadoop.hdfs.server.protocol.VolumeFailureSummary;
 import org.apache.hadoop.hdfs.util.EnumCounters;
 import org.apache.hadoop.hdfs.util.LightWeightHashSet;
-import org.apache.hadoop.io.erasurecode.ECSchema;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
 import org.apache.hadoop.util.IntrusiveCollection;
 import org.apache.hadoop.util.Time;
 
@@ -611,10 +611,10 @@ public class DatanodeDescriptor extends DatanodeInfo {
*/
   void addBlockToBeErasureCoded(ExtendedBlock block,
   DatanodeDescriptor[] sources, DatanodeStorageInfo[] targets,
-  short[] liveBlockIndices, ECSchema ecSchema, int cellSize) {
+  short[] 

[36/36] hadoop git commit: HDFS-8854. Erasure coding: add ECPolicy to replace schema+cellSize in hadoop-hdfs. Contributed by Walter Su.

2015-08-14 Thread zhz
HDFS-8854. Erasure coding: add ECPolicy to replace schema+cellSize in 
hadoop-hdfs. Contributed by Walter Su.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/acbe42a8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/acbe42a8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/acbe42a8

Branch: refs/heads/HDFS-7285-merge
Commit: acbe42a85216cea5c9e3a1135dc5318a27329bde
Parents: ecf3634
Author: Zhe Zhang zhezh...@cloudera.com
Authored: Wed Aug 12 11:21:43 2015 -0700
Committer: Zhe Zhang zhezh...@cloudera.com
Committed: Fri Aug 14 10:54:43 2015 -0700

--
 .../apache/hadoop/io/erasurecode/ECSchema.java  |  42 +
 .../hadoop/io/erasurecode/SchemaLoader.java | 152 ---
 .../hadoop/io/erasurecode/TestECSchema.java |   6 +-
 .../hadoop/io/erasurecode/TestSchemaLoader.java |  74 -
 .../hdfs/client/HdfsClientConfigKeys.java   |   4 +-
 .../hadoop/hdfs/protocol/ClientProtocol.java|  10 +-
 .../hdfs/protocol/ErasureCodingPolicy.java  |  93 
 .../hadoop/hdfs/protocol/ErasureCodingZone.java |  26 +---
 .../hadoop/hdfs/protocol/HdfsConstants.java |   4 +-
 .../hadoop/hdfs/protocol/HdfsFileStatus.java|  17 +--
 .../hadoop/hdfs/protocol/LocatedBlocks.java |  25 +--
 .../protocol/SnapshottableDirectoryStatus.java  |   2 +-
 .../apache/hadoop/hdfs/web/JsonUtilClient.java  |   4 +-
 .../src/main/proto/ClientNamenodeProtocol.proto |   4 +-
 .../src/main/proto/erasurecoding.proto  |  17 +--
 .../src/main/proto/hdfs.proto   |  21 +--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  20 +--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   6 -
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |   2 +-
 .../hadoop/hdfs/DFSStripedInputStream.java  |  20 +--
 .../hadoop/hdfs/DFSStripedOutputStream.java |  10 +-
 .../hadoop/hdfs/DistributedFileSystem.java  |  13 +-
 .../apache/hadoop/hdfs/client/HdfsAdmin.java|  23 ++-
 .../hdfs/protocol/HdfsLocatedFileStatus.java|   5 +-
 ...tNamenodeProtocolServerSideTranslatorPB.java |  25 ++-
 .../ClientNamenodeProtocolTranslatorPB.java |  36 +++--
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  59 ---
 .../blockmanagement/BlockInfoStriped.java   |  37 ++---
 .../BlockInfoStripedUnderConstruction.java  |  13 +-
 .../server/blockmanagement/BlockManager.java|  15 +-
 .../blockmanagement/DatanodeDescriptor.java |   6 +-
 .../hdfs/server/datanode/StorageLocation.java   |   2 +-
 .../erasurecode/ErasureCodingWorker.java|  11 +-
 .../apache/hadoop/hdfs/server/mover/Mover.java  |  14 +-
 .../namenode/ErasureCodingPolicyManager.java| 115 ++
 .../namenode/ErasureCodingSchemaManager.java| 127 
 .../namenode/ErasureCodingZoneManager.java  |  45 +++---
 .../server/namenode/FSDirErasureCodingOp.java   |  47 ++
 .../server/namenode/FSDirStatAndListingOp.java  |  18 +--
 .../hdfs/server/namenode/FSDirWriteFileOp.java  |  11 +-
 .../hdfs/server/namenode/FSEditLogLoader.java   |   8 +-
 .../server/namenode/FSImageFormatPBINode.java   |  23 +--
 .../hdfs/server/namenode/FSNamesystem.java  |  52 +++
 .../hdfs/server/namenode/NameNodeRpcServer.java |  11 +-
 .../hdfs/server/namenode/NamenodeFsck.java  |  10 +-
 .../server/protocol/BlockECRecoveryCommand.java |  23 +--
 .../hdfs/tools/erasurecode/ECCommand.java   |  80 +-
 .../hadoop/hdfs/util/StripedBlockUtil.java  |  56 +++
 .../hadoop-hdfs/src/main/proto/fsimage.proto|   1 -
 .../org/apache/hadoop/hdfs/DFSTestUtil.java |   2 +-
 .../hadoop/hdfs/TestDFSClientRetries.java   |   6 +-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  |  16 +-
 .../hadoop/hdfs/TestDFSStripedOutputStream.java |   2 +-
 .../TestDFSStripedOutputStreamWithFailure.java  |   2 +-
 .../org/apache/hadoop/hdfs/TestDFSUtil.java |   2 +-
 .../apache/hadoop/hdfs/TestDatanodeConfig.java  |   4 +-
 .../org/apache/hadoop/hdfs/TestECSchemas.java   |  54 ---
 .../apache/hadoop/hdfs/TestEncryptionZones.java |   2 +-
 .../hadoop/hdfs/TestErasureCodingZones.java |  58 +++
 .../hadoop/hdfs/TestFileStatusWithECPolicy.java |  65 
 .../hadoop/hdfs/TestFileStatusWithECschema.java |  65 
 .../java/org/apache/hadoop/hdfs/TestLease.java  |   4 +-
 .../hdfs/TestReadStripedFileWithDecoding.java   |   3 +-
 .../TestReadStripedFileWithMissingBlocks.java   |   3 +-
 .../hadoop/hdfs/TestRecoverStripedFile.java |   2 +-
 .../hdfs/TestSafeModeWithStripedFile.java   |   3 +-
 .../hadoop/hdfs/TestWriteReadStripedFile.java   |   3 +-
 .../hdfs/TestWriteStripedFileWithFailure.java   |   5 +-
 .../hadoop/hdfs/protocolPB/TestPBHelper.java|  34 ++---
 .../hdfs/server/balancer/TestBalancer.java  |   2 +-
 

[20/36] hadoop git commit: HDFS-7285. Erasure Coding Support inside HDFS.

2015-08-14 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecf36348/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
new file mode 100644
index 000..baf6106
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
@@ -0,0 +1,335 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT;
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils;
+import org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset;
+import org.apache.hadoop.hdfs.server.namenode.ErasureCodingSchemaManager;
+import org.apache.hadoop.hdfs.util.StripedBlockUtil;
+import org.apache.hadoop.io.erasurecode.CodecUtil;
+import org.apache.hadoop.io.erasurecode.ECSchema;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.List;
+
+public class TestDFSStripedInputStream {
+
+  public static final Log LOG = 
LogFactory.getLog(TestDFSStripedInputStream.class);
+
+  private MiniDFSCluster cluster;
+  private Configuration conf = new Configuration();
+  private DistributedFileSystem fs;
+  private final Path dirPath = new Path("/striped");
+  private Path filePath = new Path(dirPath, "file");
+  private final ECSchema schema = 
ErasureCodingSchemaManager.getSystemDefaultSchema();
+  private final short DATA_BLK_NUM = HdfsConstants.NUM_DATA_BLOCKS;
+  private final short PARITY_BLK_NUM = HdfsConstants.NUM_PARITY_BLOCKS;
+  private final int CELLSIZE = HdfsConstants.BLOCK_STRIPED_CELL_SIZE;
+  private final int NUM_STRIPE_PER_BLOCK = 2;
+  private final int INTERNAL_BLOCK_SIZE = NUM_STRIPE_PER_BLOCK * CELLSIZE;
+  private final int BLOCK_GROUP_SIZE =  DATA_BLK_NUM * INTERNAL_BLOCK_SIZE;
+
+  @Before
+  public void setup() throws IOException {
+conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, INTERNAL_BLOCK_SIZE);
+SimulatedFSDataset.setFactory(conf);
+cluster = new MiniDFSCluster.Builder(conf).numDataNodes(
+DATA_BLK_NUM + PARITY_BLK_NUM).build();
+cluster.waitActive();
+for (DataNode dn : cluster.getDataNodes()) {
+  DataNodeTestUtils.setHeartbeatsDisabledForTests(dn, true);
+}
+fs = cluster.getFileSystem();
+fs.mkdirs(dirPath);
+fs.getClient().createErasureCodingZone(dirPath.toString(), null, CELLSIZE);
+  }
+
+  @After
+  public void tearDown() {
+if (cluster != null) {
+  cluster.shutdown();
+}
+  }
+
+  /**
+   * Test {@link DFSStripedInputStream#getBlockAt(long)}
+   */
+  @Test
+  public void testRefreshBlock() throws Exception {
+final int numBlocks = 4;
+DFSTestUtil.createStripedFile(cluster, filePath, null, numBlocks,
+NUM_STRIPE_PER_BLOCK, false);
+LocatedBlocks lbs = fs.getClient().namenode.getBlockLocations(
+filePath.toString(), 0, BLOCK_GROUP_SIZE * numBlocks);
+final DFSStripedInputStream in = new 
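
The constants in the setup above pin down the block group geometry that testRefreshBlock exercises. A small sketch that recomputes those relationships with illustrative numbers (the values are assumptions, the relationships mirror the test's constants):

// Recomputes the test's block group geometry; concrete numbers illustrative.
public class StripeGeometrySketch {
  public static void main(String[] args) {
    final int dataBlkNum = 6;               // e.g. NUM_DATA_BLOCKS
    final int cellSize = 64 * 1024;         // e.g. BLOCK_STRIPED_CELL_SIZE
    final int stripesPerBlock = 2;          // NUM_STRIPE_PER_BLOCK

    final int internalBlockSize = stripesPerBlock * cellSize;
    final int blockGroupSize = dataBlkNum * internalBlockSize;

    // A logical file offset lands in block group (offset / blockGroupSize).
    long offset = 3L * blockGroupSize + 42;
    long groupIndex = offset / blockGroupSize;
    System.out.println("internalBlockSize=" + internalBlockSize
        + " blockGroupSize=" + blockGroupSize + " groupIndex=" + groupIndex);
  }
}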

[07/36] hadoop git commit: HDFS-7213. processIncrementalBlockReport performance degradation. Contributed by Eric Payne. Moved CHANGES.TXT entry to 2.6.1

2015-08-14 Thread zhz
HDFS-7213. processIncrementalBlockReport performance degradation. Contributed 
by Eric Payne.
Moved CHANGES.TXT entry to 2.6.1


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d25cb8fe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d25cb8fe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d25cb8fe

Branch: refs/heads/HDFS-7285-merge
Commit: d25cb8fe12d00faf3e8f3bfd23fd1b01981a340f
Parents: 6b1cefc
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 11:23:51 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 11:23:51 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d25cb8fe/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index ce9a3f1..1f72264 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1265,9 +1265,6 @@ Release 2.7.1 - 2015-07-06
 HDFS-8451. DFSClient probe for encryption testing interprets empty URI
 property for enabled. (Steve Loughran via xyao)
 
-HDFS-8486. DN startup may cause severe data loss (Daryn Sharp via Colin P.
-McCabe)
-
 HDFS-8270. create() always retried with hardcoded timeout when file already
 exists with open lease (J.Andreina via vinayakumarb)
 
@@ -1407,9 +1404,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-5928. Show namespace and namenode ID on NN dfshealth page.
 (Siqi Li via wheat9)
 
-HDFS-7213. processIncrementalBlockReport performance degradation.
-(Eric Payne via kihwal)
-
 HDFS-7280. Use netty 4 in WebImageViewer. (wheat9)
 
 HDFS-3342. SocketTimeoutException in BlockSender.sendChunks could
@@ -2339,6 +2333,12 @@ Release 2.6.1 - UNRELEASED
 HDFS-7733. NFS: readdir/readdirplus return null directory
 attribute on failure. (Arpit Agarwal)
 
+HDFS-8486. DN startup may cause severe data loss (Daryn Sharp via Colin P.
+McCabe)
+
+HDFS-7213. processIncrementalBlockReport performance degradation.
+(Eric Payne via kihwal)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[04/36] hadoop git commit: YARN-4047. ClientRMService getApplications has high scheduler lock contention. Contributed by Jason Lowe

2015-08-14 Thread zhz
YARN-4047. ClientRMService getApplications has high scheduler lock contention. 
Contributed by Jason Lowe


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7a445fcf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7a445fcf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7a445fcf

Branch: refs/heads/HDFS-7285-merge
Commit: 7a445fcfabcf9c6aae219051f65d3f6cb8feb87c
Parents: 38aed1a
Author: Jian He jia...@apache.org
Authored: Thu Aug 13 16:02:57 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Thu Aug 13 16:02:57 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt  |  3 +++
 .../yarn/server/resourcemanager/ClientRMService.java | 11 +++
 2 files changed, 10 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7a445fcf/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 3d19734..a4c16b1 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -766,6 +766,9 @@ Release 2.8.0 - UNRELEASED
 YARN-4005. Completed container whose app is finished is possibly not
 removed from NMStateStore. (Jun Gong via jianhe)
 
+YARN-4047. ClientRMService getApplications has high scheduler lock 
contention.
+(Jason Lowe via jianhe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7a445fcf/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
index e4199be..2dcfe9a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
@@ -752,13 +752,9 @@ public class ClientRMService extends AbstractService 
implements
   RMApp application = appsIter.next();
 
   // Check if current application falls under the specified scope
-  boolean allowAccess = checkAccess(callerUGI, application.getUser(),
-  ApplicationAccessType.VIEW_APP, application);
   if (scope == ApplicationsRequestScope.OWN &&
   !callerUGI.getUserName().equals(application.getUser())) {
 continue;
-  } else if (scope == ApplicationsRequestScope.VIEWABLE && !allowAccess) {
-continue;
   }
 
   if (applicationTypes != null && !applicationTypes.isEmpty()) {
@@ -807,6 +803,13 @@ public class ClientRMService extends AbstractService 
implements
 }
   }
 
+  // checkAccess can grab the scheduler lock so call it last
+  boolean allowAccess = checkAccess(callerUGI, application.getUser(),
+  ApplicationAccessType.VIEW_APP, application);
+  if (scope == ApplicationsRequestScope.VIEWABLE && !allowAccess) {
+continue;
+  }
+
   reports.add(application.createAndGetApplicationReport(
   callerUGI.getUserName(), allowAccess));
 }
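
The fix is purely an ordering change: cheap, lock-free filters (scope, application type, tags) run first, and checkAccess, which may take the scheduler lock, runs only for applications that survive them. A generic sketch of the pattern with stand-in types (not the RM's classes):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Sketch of "run the lock-taking check last"; the types are stand-ins.
public class CheapFiltersFirst {
  static <T> List<T> filter(Iterable<T> items, List<Predicate<T>> cheap,
      Predicate<T> expensive) {
    List<T> out = new ArrayList<T>();
    outer:
    for (T item : items) {
      for (Predicate<T> p : cheap) {
        if (!p.test(item)) {
          continue outer;             // rejected without the costly check
        }
      }
      if (expensive.test(item)) {     // e.g. an ACL check taking a lock
        out.add(item);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<String> apps = Arrays.asList("app-own", "app-other");
    System.out.println(filter(apps,
        Arrays.<Predicate<String>>asList(a -> a.startsWith("app")),
        a -> a.endsWith("own"))); // expensive check runs only on survivors
  }
}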



[15/36] hadoop git commit: HDFS-8270. create() always retried with hardcoded timeout when file already exists with open lease (Contributed by J.Andreina) Moved to 2.6.1

2015-08-14 Thread zhz
HDFS-8270. create() always retried with hardcoded timeout when file already 
exists with open lease (Contributed by J.Andreina)
Moved to 2.6.1


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/84bf7129
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/84bf7129
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/84bf7129

Branch: refs/heads/HDFS-7285-merge
Commit: 84bf71295a5e52b2a7bb69440a885a25bc75f544
Parents: fc508b4
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 16:13:30 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 16:13:30 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/84bf7129/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index dba4535..0b28709 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1265,9 +1265,6 @@ Release 2.7.1 - 2015-07-06
 HDFS-8451. DFSClient probe for encryption testing interprets empty URI
 property for enabled. (Steve Loughran via xyao)
 
-HDFS-8270. create() always retried with hardcoded timeout when file already
-exists with open lease (J.Andreina via vinayakumarb)
-
 HDFS-8523. Remove usage information on unsupported operation
 fsck -showprogress from branch-2 (J.Andreina via vinayakumarb)
 
@@ -2339,6 +2336,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7225. Remove stale block invalidation work when DN re-registers with
 different UUID. (Zhe Zhang and Andrew Wang)
 
+HDFS-8270. create() always retried with hardcoded timeout when file already
+exists with open lease (J.Andreina via vinayakumarb)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[25/36] hadoop git commit: HDFS-7285. Erasure Coding Support inside HDFS.

2015-08-14 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecf36348/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
new file mode 100644
index 000..622b258
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
@@ -0,0 +1,54 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.net.NetworkTopology;
+import org.apache.hadoop.util.ReflectionUtils;
+
+public class BlockPlacementPolicies {
+
+  private final BlockPlacementPolicy replicationPolicy;
+  private final BlockPlacementPolicy ecPolicy;
+
+  public BlockPlacementPolicies(Configuration conf, FSClusterStats stats,
+NetworkTopology clusterMap,
+Host2NodesMap host2datanodeMap){
+final Class<? extends BlockPlacementPolicy> replicatorClass = conf
+.getClass(DFSConfigKeys.DFS_BLOCK_REPLICATOR_CLASSNAME_KEY,
+DFSConfigKeys.DFS_BLOCK_REPLICATOR_CLASSNAME_DEFAULT,
+BlockPlacementPolicy.class);
+replicationPolicy = ReflectionUtils.newInstance(replicatorClass, conf);
+replicationPolicy.initialize(conf, stats, clusterMap, host2datanodeMap);
+final Class<? extends BlockPlacementPolicy> blockPlacementECClass =
+conf.getClass(DFSConfigKeys.DFS_BLOCK_PLACEMENT_EC_CLASSNAME_KEY,
+DFSConfigKeys.DFS_BLOCK_PLACEMENT_EC_CLASSNAME_DEFAULT,
+BlockPlacementPolicy.class);
+ecPolicy = ReflectionUtils.newInstance(blockPlacementECClass, conf);
+ecPolicy.initialize(conf, stats, clusterMap, host2datanodeMap);
+  }
+
+  public BlockPlacementPolicy getPolicy(boolean isStriped) {
+if (isStriped) {
+  return ecPolicy;
+} else {
+  return replicationPolicy;
+}
+  }
+}
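
With the new class, call sites reduce to getPolicy(isStriped). A self-contained sketch of the selector shape, where Policy stands in for BlockPlacementPolicy and the strings for real policy implementations:

// Minimal sketch of the two-policy selector introduced above; stand-in types.
public class PolicySelectorSketch {
  interface Policy { String name(); }

  private final Policy replicationPolicy = new Policy() {
    public String name() { return "replication"; }
  };
  private final Policy ecPolicy = new Policy() {
    public String name() { return "erasure-coding"; }
  };

  public Policy getPolicy(boolean isStriped) {
    return isStriped ? ecPolicy : replicationPolicy;
  }

  public static void main(String[] args) {
    PolicySelectorSketch s = new PolicySelectorSketch();
    System.out.println(s.getPolicy(true).name());   // erasure-coding
    System.out.println(s.getPolicy(false).name());  // replication
  }
}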

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecf36348/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
index 9696179..86aaf79 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
@@ -145,31 +145,7 @@ public abstract class BlockPlacementPolicy {
   abstract protected void initialize(Configuration conf,  FSClusterStats 
stats, 
  NetworkTopology clusterMap, 
  Host2NodesMap host2datanodeMap);
-
-  /**
-   * Get an instance of the configured Block Placement Policy based on the
-   * the configuration property
-   * {@link  DFSConfigKeys#DFS_BLOCK_REPLICATOR_CLASSNAME_KEY}.
-   * 
-   * @param conf the configuration to be used
-   * @param stats an object that is used to retrieve the load on the cluster
-   * @param clusterMap the network topology of the cluster
-   * @return an instance of BlockPlacementPolicy
-   */
-  public static BlockPlacementPolicy getInstance(Configuration conf, 
- FSClusterStats stats,
- NetworkTopology clusterMap,
- Host2NodesMap 
host2datanodeMap) {
-final Class<? extends BlockPlacementPolicy> replicatorClass = 

[21/36] hadoop git commit: HDFS-7285. Erasure Coding Support inside HDFS.

2015-08-14 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecf36348/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
new file mode 100644
index 000..4dc94a0
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
@@ -0,0 +1,947 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.util;
+
+import com.google.common.annotations.VisibleForTesting;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSClient;
+import org.apache.hadoop.hdfs.DFSStripedOutputStream;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
+import org.apache.hadoop.io.erasurecode.ECSchema;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;
+import org.apache.hadoop.security.token.Token;
+
+import java.nio.ByteBuffer;
+import java.util.*;
+import java.io.IOException;
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.CompletionService;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * When accessing a file in striped layout, operations on logical byte ranges
+ * in the file need to be mapped to physical byte ranges on block files stored
+ * on DataNodes. This utility class facilities this mapping by defining and
+ * exposing a number of striping-related concepts. The most basic ones are
+ * illustrated in the following diagram. Unless otherwise specified, all
+ * range-related calculations are inclusive (the end offset of the previous
+ * range should be 1 byte lower than the start offset of the next one).
+ *
+ *  |   Block Group  |   - Block Group: logical unit composing
+ *  |  |striped HDFS files.
+ *  blk_0  blk_1   blk_2   - Internal Blocks: each internal block
+ *|  |   |  represents a physically stored local
+ *v  v   v  block file
+ * +--+   +--+   +--+
+ * |cell_0|   |cell_1|   |cell_2|  - {@link StripingCell} represents the
+ * +--+   +--+   +--+   logical order that a Block Group should
+ * |cell_3|   |cell_4|   |cell_5|   be accessed: cell_0, cell_1, ...
+ * +--+   +--+   +--+
+ * |cell_6|   |cell_7|   |cell_8|
+ * +--+   +--+   +--+
+ * |cell_9|
+ * +--+  - A cell contains cellSize bytes of data
+ */
+@InterfaceAudience.Private
+public class StripedBlockUtil {
+
+  /**
+   * This method parses a striped block group into individual blocks.
+   *
+   * @param bg The striped block group
+   * @param cellSize The size of a striping cell
+   * @param dataBlkNum The number of data blocks
+   * @return An array containing the blocks in the group
+   */
+  public static LocatedBlock[] parseStripedBlockGroup(LocatedStripedBlock bg,
+  int cellSize, int dataBlkNum, int parityBlkNum) {
+int locatedBGSize = bg.getBlockIndices().length;
+LocatedBlock[] lbs = new LocatedBlock[dataBlkNum + parityBlkNum];
+for (short i = 0; i < locatedBGSize; i++) {
+  final int idx = bg.getBlockIndices()[i];
+  // for now we do not use redundant replica of an internal block
+  if (idx < (dataBlkNum + parityBlkNum) && lbs[idx] == null) {
+lbs[idx] = constructInternalBlock(bg, i, cellSize,
+dataBlkNum, idx);
+  }
+}
+return lbs;
+  }
+
+  /**
+   * This method creates an internal block at the given index of a block group
+   *
+   * @param idxInReturnedLocs The index in 
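
The diagram above fixes the round-robin cell order, so a logical offset in a block group determines a cell, an internal block, and an offset inside that block by plain arithmetic. A sketch of that mapping with illustrative sizes (three data blocks, matching the three columns drawn above):

// Sketch of the logical-offset -> (internal block, offset) mapping implied
// by the round-robin cell layout; the concrete values are illustrative.
public class StripingMathSketch {
  public static void main(String[] args) {
    final long cellSize = 64 * 1024;   // bytes per cell (illustrative)
    final int dataBlkNum = 3;          // matches the 3-column diagram

    long offsetInGroup = 5 * cellSize + 123;       // some logical offset

    long cellIdx = offsetInGroup / cellSize;       // -> cell_5
    int blockIdx = (int) (cellIdx % dataBlkNum);   // -> blk_2
    long stripeIdx = cellIdx / dataBlkNum;         // -> second stripe
    long offsetInBlock = stripeIdx * cellSize
        + offsetInGroup % cellSize;                // offset inside blk_2

    System.out.printf("cell=%d block=%d offsetInBlock=%d%n",
        cellIdx, blockIdx, offsetInBlock);
  }
}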

[29/36] hadoop git commit: HDFS-7285. Erasure Coding Support inside HDFS.

2015-08-14 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecf36348/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
new file mode 100644
index 000..3612063
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -0,0 +1,939 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.ChecksumException;
+import org.apache.hadoop.fs.ReadOption;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
+import 
org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockIdManager;
+import org.apache.hadoop.hdfs.util.StripedBlockUtil;
+import org.apache.hadoop.io.ByteBufferPool;
+
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.AlignedStripe;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.StripingChunk;
+import static 
org.apache.hadoop.hdfs.util.StripedBlockUtil.StripingChunkReadResult;
+
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.erasurecode.CodecUtil;
+import org.apache.hadoop.io.erasurecode.ECSchema;
+
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;
+import org.apache.hadoop.util.DirectBufferPool;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.EnumSet;
+import java.util.Set;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.concurrent.CompletionService;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorCompletionService;
+import java.util.concurrent.Callable;
+import java.util.concurrent.Future;
+
+/**
+ * DFSStripedInputStream reads from striped block groups
+ */
+public class DFSStripedInputStream extends DFSInputStream {
+
+  private static class ReaderRetryPolicy {
+private int fetchEncryptionKeyTimes = 1;
+private int fetchTokenTimes = 1;
+
+void refetchEncryptionKey() {
+  fetchEncryptionKeyTimes--;
+}
+
+void refetchToken() {
+  fetchTokenTimes--;
+}
+
+boolean shouldRefetchEncryptionKey() {
+  return fetchEncryptionKeyTimes > 0;
+}
+
+boolean shouldRefetchToken() {
+  return fetchTokenTimes > 0;
+}
+  }
+
+  /** Used to indicate the buffered data's range in the block group */
+  private static class StripeRange {
+/** start offset in the block group (inclusive) */
+final long offsetInBlock;
+/** length of the stripe range */
+final long length;
+
+StripeRange(long offsetInBlock, long length) {
+  Preconditions.checkArgument(offsetInBlock >= 0 && length >= 0);
+  this.offsetInBlock = offsetInBlock;
+  this.length = length;
+}
+
+boolean include(long pos) {
+  return pos >= offsetInBlock && pos < offsetInBlock + length;
+}
+  }
+
+  private static class BlockReaderInfo {
+final BlockReader reader;
+final DatanodeInfo datanode;
+/**
+ * when initializing block readers, their starting offsets are set to the 
same
+ * number: the smallest internal block offsets among all the readers. This 
is
+ * because it is possible that for some internal blocks we have to read
+ * backwards for decoding purpose. We thus use this offset array to track
+ * offsets for all the block readers so that we can skip data if necessary.
+ */
+long blockReaderOffset;
+LocatedBlock targetBlock;
+/**
+ * We use this field to indicate whether we should use this reader. In case
+ * 
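
ReaderRetryPolicy above encodes a fixed budget: one re-fetch of the encryption key and one of the block token per reader, consumed on use. A tiny sketch of that one-shot budget pattern (a stand-in class, not the HDFS type):

// Sketch of the one-shot retry budget pattern used by ReaderRetryPolicy:
// each recoverable failure kind may be retried a fixed number of times.
public class RetryBudgetSketch {
  private int remaining;

  public RetryBudgetSketch(int retries) { this.remaining = retries; }

  /** Returns true if a retry is still allowed, consuming one if so. */
  public boolean tryConsume() {
    if (remaining > 0) { remaining--; return true; }
    return false;
  }

  public static void main(String[] args) {
    RetryBudgetSketch tokenRetries = new RetryBudgetSketch(1);
    System.out.println(tokenRetries.tryConsume()); // true: retry once
    System.out.println(tokenRetries.tryConsume()); // false: give up
  }
}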

[12/36] hadoop git commit: HDFS-7225. Remove stale block invalidation work when DN re-registers with different UUID. (Zhe Zhang and Andrew Wang) Moved to 2.6.1

2015-08-14 Thread zhz
HDFS-7225. Remove stale block invalidation work when DN re-registers with 
different UUID. (Zhe Zhang and Andrew Wang)
Moved to 2.6.1


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/08bd4edf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/08bd4edf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/08bd4edf

Branch: refs/heads/HDFS-7285-merge
Commit: 08bd4edf4092901273da0d73a5cc760fdc11052b
Parents: e7aa813
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Aug 14 12:38:00 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Aug 14 12:38:00 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/08bd4edf/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 1507cbe..dba4535 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1848,9 +1848,6 @@ Release 2.7.0 - 2015-04-20
 HDFS-7406. SimpleHttpProxyHandler puts incorrect Connection: Close
 header. (wheat9)
 
-HDFS-7225. Remove stale block invalidation work when DN re-registers with
-different UUID. (Zhe Zhang and Andrew Wang)
-
 HDFS-7374. Allow decommissioning of dead DataNodes. (Zhe Zhang)
 
 HDFS-7403. Inaccurate javadoc of BlockUCState#COMPLETE state. (
@@ -2339,6 +2336,9 @@ Release 2.6.1 - UNRELEASED
 HDFS-7263. Snapshot read can reveal future bytes for appended files.
 (Tao Luo via shv)
 
+HDFS-7225. Remove stale block invalidation work when DN re-registers with
+different UUID. (Zhe Zhang and Andrew Wang)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[32/36] hadoop git commit: HDFS-7285. Erasure Coding Support inside HDFS.

2015-08-14 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecf36348/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
new file mode 100644
index 000..3ed3e20
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
@@ -0,0 +1,67 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode.coder;
+
+import org.apache.hadoop.io.erasurecode.CodecUtil;
+import org.apache.hadoop.io.erasurecode.ECBlock;
+import org.apache.hadoop.io.erasurecode.ECBlockGroup;
+import org.apache.hadoop.io.erasurecode.ECSchema;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;
+
+/**
+ * Reed-Solomon erasure encoder that encodes a block group.
+ *
+ * It implements {@link ErasureCoder}.
+ */
+public class RSErasureEncoder extends AbstractErasureEncoder {
+  private RawErasureEncoder rawEncoder;
+
+  public RSErasureEncoder(int numDataUnits, int numParityUnits) {
+super(numDataUnits, numParityUnits);
+  }
+
+  public RSErasureEncoder(ECSchema schema) {
+super(schema);
+  }
+
+  @Override
+  protected ErasureCodingStep prepareEncodingStep(final ECBlockGroup 
blockGroup) {
+
+RawErasureEncoder rawEncoder = checkCreateRSRawEncoder();
+
+ECBlock[] inputBlocks = getInputBlocks(blockGroup);
+
+return new ErasureEncodingStep(inputBlocks,
+getOutputBlocks(blockGroup), rawEncoder);
+  }
+
+  private RawErasureEncoder checkCreateRSRawEncoder() {
+if (rawEncoder == null) {
+  rawEncoder = CodecUtil.createRSRawEncoder(getConf(),
+  getNumDataUnits(), getNumParityUnits());
+}
+return rawEncoder;
+  }
+
+  @Override
+  public void release() {
+if (rawEncoder != null) {
+  rawEncoder.release();
+}
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecf36348/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
new file mode 100644
index 000..a847418
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
@@ -0,0 +1,86 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode.coder;
+
+import org.apache.hadoop.io.erasurecode.CodecUtil;
+import org.apache.hadoop.io.erasurecode.ECBlock;
+import org.apache.hadoop.io.erasurecode.ECBlockGroup;
+import org.apache.hadoop.io.erasurecode.ECSchema;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;
+
+/**
+ * Xor erasure decoder that decodes a block group.
+ *
+ * It implements {@link ErasureCoder}.
+ */
+public class XORErasureDecoder extends AbstractErasureDecoder {
+
+  public XORErasureDecoder(int numDataUnits, int numParityUnits) {
+   
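
For the XOR codec, the math fits in a few lines: the parity unit is the XOR of all data units, so any single erased unit is the XOR of the parity with the surviving data units. A byte-level sketch (not the Hadoop RawErasureDecoder API):

import java.util.Arrays;

// Byte-level XOR erasure sketch: parity = d0 ^ d1 ^ ... ^ d(n-1), and a
// single erased unit equals the XOR of all surviving units plus the parity.
public class XorErasureSketch {
  static byte[] xorAll(byte[][] units) {
    byte[] out = new byte[units[0].length];
    for (byte[] u : units) {
      for (int i = 0; i < out.length; i++) {
        out[i] ^= u[i];
      }
    }
    return out;
  }

  public static void main(String[] args) {
    byte[][] data = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
    byte[] parity = xorAll(data);

    // Erase data[1], then rebuild it from the survivors and the parity.
    byte[] rebuilt = xorAll(new byte[][] { data[0], data[2], parity });
    System.out.println(Arrays.equals(rebuilt, data[1]));   // true
  }
}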

[33/36] hadoop git commit: HDFS-7285. Erasure Coding Support inside HDFS.

2015-08-14 Thread zhz
HDFS-7285. Erasure Coding Support inside HDFS.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ecf36348
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ecf36348
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ecf36348

Branch: refs/heads/HDFS-7285-merge
Commit: ecf3634830fdbf228c056e3d4ae77ae0def17683
Parents: 84bf712
Author: Zhe Zhang zhezh...@cloudera.com
Authored: Thu Aug 6 23:24:03 2015 -0700
Committer: Zhe Zhang zhezh...@cloudera.com
Committed: Fri Aug 14 10:54:43 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |   74 +
 .../hadoop/fs/CommonConfigurationKeys.java  |   15 +
 .../org/apache/hadoop/fs/FSOutputSummer.java|4 +
 .../main/java/org/apache/hadoop/fs/FsShell.java |8 +-
 .../apache/hadoop/io/erasurecode/CodecUtil.java |  144 ++
 .../apache/hadoop/io/erasurecode/ECBlock.java   |   80 ++
 .../hadoop/io/erasurecode/ECBlockGroup.java |  100 ++
 .../apache/hadoop/io/erasurecode/ECChunk.java   |   87 ++
 .../apache/hadoop/io/erasurecode/ECSchema.java  |  257 
 .../hadoop/io/erasurecode/SchemaLoader.java |  152 +++
 .../erasurecode/codec/AbstractErasureCodec.java |   51 +
 .../io/erasurecode/codec/ErasureCodec.java  |   49 +
 .../io/erasurecode/codec/RSErasureCodec.java|   43 +
 .../io/erasurecode/codec/XORErasureCodec.java   |   44 +
 .../erasurecode/coder/AbstractErasureCoder.java |   62 +
 .../coder/AbstractErasureCodingStep.java|   59 +
 .../coder/AbstractErasureDecoder.java   |  167 +++
 .../coder/AbstractErasureEncoder.java   |   60 +
 .../io/erasurecode/coder/ErasureCoder.java  |   77 ++
 .../io/erasurecode/coder/ErasureCodingStep.java |   55 +
 .../erasurecode/coder/ErasureDecodingStep.java  |   52 +
 .../erasurecode/coder/ErasureEncodingStep.java  |   49 +
 .../io/erasurecode/coder/RSErasureDecoder.java  |   67 +
 .../io/erasurecode/coder/RSErasureEncoder.java  |   67 +
 .../io/erasurecode/coder/XORErasureDecoder.java |   86 ++
 .../io/erasurecode/coder/XORErasureEncoder.java |   53 +
 .../io/erasurecode/grouper/BlockGrouper.java|   90 ++
 .../rawcoder/AbstractRawErasureCoder.java   |  138 ++
 .../rawcoder/AbstractRawErasureDecoder.java |  207 +++
 .../rawcoder/AbstractRawErasureEncoder.java |  136 ++
 .../io/erasurecode/rawcoder/RSRawDecoder.java   |  216 +++
 .../io/erasurecode/rawcoder/RSRawEncoder.java   |   79 ++
 .../rawcoder/RSRawErasureCoderFactory.java  |   34 +
 .../erasurecode/rawcoder/RawErasureCoder.java   |   66 +
 .../rawcoder/RawErasureCoderFactory.java|   42 +
 .../erasurecode/rawcoder/RawErasureDecoder.java |   88 ++
 .../erasurecode/rawcoder/RawErasureEncoder.java |   64 +
 .../io/erasurecode/rawcoder/XORRawDecoder.java  |   83 ++
 .../io/erasurecode/rawcoder/XORRawEncoder.java  |   77 ++
 .../rawcoder/XORRawErasureCoderFactory.java |   34 +
 .../io/erasurecode/rawcoder/util/DumpUtil.java  |   85 ++
 .../erasurecode/rawcoder/util/GaloisField.java  |  561 
 .../io/erasurecode/rawcoder/util/RSUtil.java|   39 +
 .../hadoop/io/erasurecode/BufferAllocator.java  |   91 ++
 .../hadoop/io/erasurecode/TestCoderBase.java|  500 +++
 .../hadoop/io/erasurecode/TestECSchema.java |   51 +
 .../hadoop/io/erasurecode/TestSchemaLoader.java |   74 +
 .../erasurecode/coder/TestErasureCoderBase.java |  297 
 .../erasurecode/coder/TestRSErasureCoder.java   |  126 ++
 .../io/erasurecode/coder/TestXORCoder.java  |   64 +
 .../io/erasurecode/rawcoder/TestRSRawCoder.java |  118 ++
 .../rawcoder/TestRSRawCoderBase.java|   58 +
 .../erasurecode/rawcoder/TestRawCoderBase.java  |  232 
 .../erasurecode/rawcoder/TestXORRawCoder.java   |   66 +
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml  |1 +
 .../hdfs/client/HdfsClientConfigKeys.java   |   12 +
 .../hadoop/hdfs/protocol/ClientProtocol.java|   27 +
 .../hadoop/hdfs/protocol/ErasureCodingZone.java |   66 +
 .../hadoop/hdfs/protocol/HdfsConstants.java |   11 +
 .../hadoop/hdfs/protocol/HdfsFileStatus.java|   16 +-
 .../hadoop/hdfs/protocol/LocatedBlock.java  |8 +-
 .../hadoop/hdfs/protocol/LocatedBlocks.java |   26 +-
 .../hdfs/protocol/LocatedStripedBlock.java  |   86 ++
 .../protocol/SnapshottableDirectoryStatus.java  |2 +-
 .../apache/hadoop/hdfs/web/JsonUtilClient.java  |4 +-
 .../src/main/proto/ClientNamenodeProtocol.proto |7 +
 .../src/main/proto/erasurecoding.proto  |   71 +
 .../src/main/proto/hdfs.proto   |   36 +-
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  396 ++
 .../hadoop-hdfs/src/main/bin/hdfs   |6 +
 .../org/apache/hadoop/hdfs/BlockReader.java |   10 +-
 .../apache/hadoop/hdfs/BlockReaderLocal.java|5 +
 .../hadoop/hdfs/BlockReaderLocalLegacy.java |5 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  

[44/50] [abbrv] hadoop git commit: YARN-3993. Changed to use the AM flag in ContainerContext to determine the AM container in TestPerNodeTimelineCollectorsAuxService. Contributed by Sunil G.

2015-08-14 Thread vinodkv
YARN-3993. Changed to use the AM flag in ContainerContext to determine the AM
container in TestPerNodeTimelineCollectorsAuxService. Contributed by Sunil G.

(cherry picked from commit 9e48f9ff2ce08f3dcdd8d60bacb697664b92196f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4a43f062
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4a43f062
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4a43f062

Branch: refs/heads/YARN-2928
Commit: 4a43f062edb72e68233d2113101fbfe965d4b953
Parents: f488b61
Author: Zhijie Shen zjs...@apache.org
Authored: Mon Aug 3 16:55:44 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:27 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt|  3 +++
 .../collector/PerNodeTimelineCollectorsAuxService.java | 13 +++--
 .../TestPerNodeTimelineCollectorsAuxService.java   |  5 +
 3 files changed, 11 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a43f062/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 60bd2fd..8f419e6 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -118,6 +118,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3908. Fixed bugs in HBaseTimelineWriterImpl. (Vrushali C and Sangjin
 Lee via zjshen)
 
+YARN-3993. Changed to use the AM flag in ContainerContext to determine the
+AM container in TestPerNodeTimelineCollectorsAuxService. (Sunil G via zjshen)
+
 Trunk - Unreleased
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a43f062/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java
index 3ede97a..befaa83 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.yarn.server.api.AuxiliaryService;
 import org.apache.hadoop.yarn.server.api.ContainerContext;
 import org.apache.hadoop.yarn.server.api.ContainerInitializationContext;
 import org.apache.hadoop.yarn.server.api.ContainerTerminationContext;
+import org.apache.hadoop.yarn.server.api.ContainerType;
 
 import com.google.common.annotations.VisibleForTesting;
 
@@ -119,7 +120,7 @@ public class PerNodeTimelineCollectorsAuxService extends AuxiliaryService {
   public void initializeContainer(ContainerInitializationContext context) {
     // intercept the event of the AM container being created and initialize the
     // app level collector service
-    if (isApplicationMaster(context)) {
+    if (context.getContainerType() == ContainerType.APPLICATION_MASTER) {
       ApplicationId appId = context.getContainerId().
           getApplicationAttemptId().getApplicationId();
       addApplication(appId);
@@ -135,21 +136,13 @@ public class PerNodeTimelineCollectorsAuxService extends AuxiliaryService {
   public void stopContainer(ContainerTerminationContext context) {
     // intercept the event of the AM container being stopped and remove the app
     // level collector service
-    if (isApplicationMaster(context)) {
+    if (context.getContainerType() == ContainerType.APPLICATION_MASTER) {
       ApplicationId appId = context.getContainerId().
           getApplicationAttemptId().getApplicationId();
       removeApplication(appId);
     }
   }

-  private boolean isApplicationMaster(ContainerContext context) {
-    // TODO this is based on a (shaky) assumption that the container id (the
-    // last field of the full container id) for an AM is always 1
-    // we want to make this much more reliable
-    ContainerId containerId = context.getContainerId();
-    return containerId.getContainerId() == 1L;
-  }
-
   @VisibleForTesting
   boolean

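[Editor's note] The change above works because ContainerContext now carries the
container type, so an auxiliary service no longer needs the shaky
container-id == 1 heuristic. Below is a minimal sketch of the same pattern in a
standalone AuxiliaryService; the class name and service name are invented for
illustration, and the no-op hooks exist only so the sketch compiles.

import java.nio.ByteBuffer;

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.server.api.ApplicationInitializationContext;
import org.apache.hadoop.yarn.server.api.ApplicationTerminationContext;
import org.apache.hadoop.yarn.server.api.AuxiliaryService;
import org.apache.hadoop.yarn.server.api.ContainerInitializationContext;
import org.apache.hadoop.yarn.server.api.ContainerType;

public class AmAwareAuxService extends AuxiliaryService {

  public AmAwareAuxService() {
    super("am_aware_aux_service"); // hypothetical service name
  }

  @Override
  public void initializeContainer(ContainerInitializationContext context) {
    // the AM flag identifies the master container directly; no assumption
    // about container id numbering is needed
    if (context.getContainerType() == ContainerType.APPLICATION_MASTER) {
      ApplicationId appId =
          context.getContainerId().getApplicationAttemptId().getApplicationId();
      // per-application setup would go here
    }
  }

  // remaining AuxiliaryService hooks are no-ops in this sketch
  @Override
  public void initializeApplication(ApplicationInitializationContext ctx) {
  }

  @Override
  public void stopApplication(ApplicationTerminationContext ctx) {
  }

  @Override
  public ByteBuffer getMetaData() {
    return ByteBuffer.allocate(0);
  }
}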
[47/50] [abbrv] hadoop git commit: YARN-3906. Split the application table from the entity table. Contributed by Sangjin Lee.

2015-08-14 Thread vinodkv
http://git-wip-us.apache.org/repos/asf/hadoop/blob/940902ac/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineWriterImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineWriterImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineWriterImpl.java
index ab02779..95f88d1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineWriterImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineWriterImpl.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.client.Connection;
 import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
@@ -47,6 +48,10 @@ import 
org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
 import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;
 import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric.Type;
 import org.apache.hadoop.yarn.server.metrics.ApplicationMetricsConstants;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationColumn;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationColumnPrefix;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationTable;
 import 
org.apache.hadoop.yarn.server.timelineservice.storage.apptoflow.AppToFlowTable;
 import org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator;
 import 
org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineWriterUtils;
@@ -60,7 +65,15 @@ import org.junit.BeforeClass;
 import org.junit.Test;
 
 /**
- * @throws Exception
+ * Various tests to test writing entities to HBase and reading them back from
+ * it.
+ *
+ * It uses a single HBase mini-cluster for all tests which is a little more
+ * realistic, and helps test correctness in the presence of other data.
+ *
+ * Each test uses a different cluster name to be able to handle its own data
+ * even if other records exist in the table. Use a different cluster name if
+ * you add a new test.
  */
 public class TestHBaseTimelineWriterImpl {
 
@@ -78,6 +91,199 @@ public class TestHBaseTimelineWriterImpl {
         .createTable(util.getHBaseAdmin(), util.getConfiguration());
     new AppToFlowTable()
         .createTable(util.getHBaseAdmin(), util.getConfiguration());
+    new ApplicationTable()
+        .createTable(util.getHBaseAdmin(), util.getConfiguration());
+  }
+
+  @Test
+  public void testWriteApplicationToHBase() throws Exception {
+    TimelineEntities te = new TimelineEntities();
+    ApplicationEntity entity = new ApplicationEntity();
+    String id = "hello";
+    entity.setId(id);
+    Long cTime = 1425016501000L;
+    Long mTime = 1425026901000L;
+    entity.setCreatedTime(cTime);
+    entity.setModifiedTime(mTime);
+
+    // add the info map in Timeline Entity
+    Map<String, Object> infoMap = new HashMap<String, Object>();
+    infoMap.put("infoMapKey1", "infoMapValue1");
+    infoMap.put("infoMapKey2", 10);
+    entity.addInfo(infoMap);
+
+    // add the isRelatedToEntity info
+    String key = "task";
+    String value = "is_related_to_entity_id_here";
+    Set<String> isRelatedToSet = new HashSet<String>();
+    isRelatedToSet.add(value);
+    Map<String, Set<String>> isRelatedTo = new HashMap<String, Set<String>>();
+    isRelatedTo.put(key, isRelatedToSet);
+    entity.setIsRelatedToEntities(isRelatedTo);
+
+    // add the relatesTo info
+    key = "container";
+    value = "relates_to_entity_id_here";
+    Set<String> relatesToSet = new HashSet<String>();
+    relatesToSet.add(value);
+    value = "relates_to_entity_id_here_Second";
+    relatesToSet.add(value);
+    Map<String, Set<String>> relatesTo = new HashMap<String, Set<String>>();
+    relatesTo.put(key, relatesToSet);
+    entity.setRelatesToEntities(relatesTo);
+
+    // add some config entries
+    Map<String, String> conf = new HashMap<String, String>();
+    conf.put("config_param1", "value1");
+    conf.put("config_param2", "value2");
+    entity.addConfigs(conf);
+
+    // add metrics
+ 
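[Editor's note] The archive truncates the test at this point. To show where the
assembled entity ends up, here is a hedged continuation sketch: the
seven-argument write(...) signature and the identifier values are assumptions
based on this branch's TimelineWriter API, not the committed test source. te,
entity, id, and util are the test-local names from the code above.

    // continuation sketch (hypothetical): hand the entity to the writer
    te.addEntity(entity);
    HBaseTimelineWriterImpl hbi =
        new HBaseTimelineWriterImpl(util.getConfiguration());
    try {
      hbi.init(util.getConfiguration());
      hbi.start();
      // cluster/user/flow identifiers plus the app id scope the row key
      // stored in the new application table
      hbi.write("cluster_test", "user1", "some_flow", "AB7822C10F1111",
          1002345678919L, id, te);
    } finally {
      hbi.stop();
    }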

[31/50] [abbrv] hadoop git commit: YARN-3276. Code cleanup for timeline service API records. Contributed by Junping Du.

2015-08-14 Thread vinodkv
YARN-3276. Code cleanup for timeline service API records. Contributed by 
Junping Du.

(cherry picked from commit d88f30ba5359f59fb71b93a55e1c1d9a1c0dff8e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7b94e818
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7b94e818
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7b94e818

Branch: refs/heads/YARN-2928
Commit: 7b94e81838860cd64eb20357ff9d206b77bfe963
Parents: 09a9b13
Author: Zhijie Shen zjs...@apache.org
Authored: Wed Jun 3 15:13:29 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:25 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../api/records/timeline/TimelineEntity.java| 21 ++
 .../api/records/timeline/TimelineEvent.java |  8 +--
 .../records/timelineservice/TimelineEntity.java | 33 ++---
 .../records/timelineservice/TimelineEvent.java  |  7 +-
 .../records/timelineservice/TimelineMetric.java |  2 +-
 .../hadoop/yarn/util/TimelineServiceHelper.java | 47 
 .../impl/pb/AllocateResponsePBImpl.java |  4 +-
 .../yarn/util/TestTimelineServiceHelper.java| 76 
 9 files changed, 147 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b94e818/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a74f8cd..a3287f0 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -84,6 +84,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 
   IMPROVEMENTS
 
+YARN-3276. Code cleanup for timeline service API records. (Junping Du via
+zjshen)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b94e818/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineEntity.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineEntity.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineEntity.java
index a43259b..e695050 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineEntity.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineEntity.java
@@ -34,6 +34,7 @@ import javax.xml.bind.annotation.XmlRootElement;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
+import org.apache.hadoop.yarn.util.TimelineServiceHelper;
 
 /**
  * p
@@ -231,11 +232,8 @@ public class TimelineEntity implements Comparable<TimelineEntity> {
    */
   public void setRelatedEntities(
       Map<String, Set<String>> relatedEntities) {
-    if (relatedEntities != null && !(relatedEntities instanceof HashMap)) {
-      this.relatedEntities = new HashMap<String, Set<String>>(relatedEntities);
-    } else {
-      this.relatedEntities = (HashMap<String, Set<String>>) relatedEntities;
-    }
+    this.relatedEntities = TimelineServiceHelper.mapCastToHashMap(
+        relatedEntities);
   }

   /**
@@ -297,11 +295,8 @@ public class TimelineEntity implements Comparable<TimelineEntity> {
    *  a map of primary filters
    */
   public void setPrimaryFilters(Map<String, Set<Object>> primaryFilters) {
-    if (primaryFilters != null && !(primaryFilters instanceof HashMap)) {
-      this.primaryFilters = new HashMap<String, Set<Object>>(primaryFilters);
-    } else {
-      this.primaryFilters = (HashMap<String, Set<Object>>) primaryFilters;
-    }
+    this.primaryFilters =
+        TimelineServiceHelper.mapCastToHashMap(primaryFilters);
   }

   /**
@@ -350,11 +345,7 @@ public class TimelineEntity implements Comparable<TimelineEntity> {
    *  a map of other information
    */
   public void setOtherInfo(Map<String, Object> otherInfo) {
-    if (otherInfo != null && !(otherInfo instanceof HashMap)) {
-      this.otherInfo = new HashMap<String, Object>(otherInfo);
-    } else {
-      this.otherInfo = (HashMap<String, Object>) otherInfo;
-    }
+    this.otherInfo = TimelineServiceHelper.mapCastToHashMap(otherInfo);
   }

   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b94e818/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineEvent.java

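[Editor's note] TimelineServiceHelper.mapCastToHashMap centralizes the
copy-or-cast logic that each setter above previously inlined. A plausible
reconstruction of the helper, inferred purely from the removed branches rather
than taken from the committed TimelineServiceHelper source:

import java.util.HashMap;
import java.util.Map;

public final class TimelineServiceHelperSketch {

  private TimelineServiceHelperSketch() {
  }

  @SuppressWarnings("unchecked")
  public static <E, V> HashMap<E, V> mapCastToHashMap(Map<E, V> originalMap) {
    // mirror the inlined logic removed above: pass a HashMap (or null)
    // through unchanged, otherwise take a defensive copy
    if (originalMap != null && !(originalMap instanceof HashMap)) {
      return new HashMap<E, V>(originalMap);
    }
    return (HashMap<E, V>) originalMap;
  }
}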
[01/50] [abbrv] hadoop git commit: YARN-3240. Implement client API to put generic entities. Contributed by Zhijie Shen

2015-08-14 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/YARN-2928 bcd755eba - f40c73548 (forced update)


YARN-3240. Implement client API to put generic entities. Contributed by Zhijie 
Shen

(cherry picked from commit 4487da249f448d5c67b712cd0aa723e764eed77d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6b7c5538
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6b7c5538
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6b7c5538

Branch: refs/heads/YARN-2928
Commit: 6b7c553833f12fcc2e64d8fb62b7c6d04e297f27
Parents: 4f38ace
Author: Junping Du junping...@apache.org
Authored: Wed Feb 25 02:40:55 2015 -0800
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:21 2015 -0700

--
 hadoop-project/pom.xml  |   7 ++
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../timelineservice/TimelineEntities.java   |  58 ++
 .../hadoop-yarn/hadoop-yarn-common/pom.xml  |   1 +
 .../hadoop/yarn/client/api/TimelineClient.java  |  60 +-
 .../client/api/impl/TimelineClientImpl.java | 110 ---
 .../TestTimelineServiceRecords.java |   7 ++
 .../hadoop-yarn-server-tests/pom.xml|  12 ++
 .../TestTimelineServiceClientIntegration.java   |  54 +
 .../hadoop-yarn-server-timelineservice/pom.xml  |  11 ++
 .../aggregator/BaseAggregatorService.java   |   7 +-
 .../aggregator/PerNodeAggregatorServer.java |   9 +-
 .../aggregator/PerNodeAggregatorWebService.java |  54 +
 13 files changed, 344 insertions(+), 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6b7c5538/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 20eee21..acc021d 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -294,6 +294,13 @@
         <version>${project.version}</version>
       </dependency>

+      <dependency>
+        <groupId>org.apache.hadoop</groupId>
+        <artifactId>hadoop-yarn-server-timelineservice</artifactId>
+        <version>${project.version}</version>
+        <type>test-jar</type>
+      </dependency>
+
       <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-yarn-applications-distributedshell</artifactId>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6b7c5538/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index ad8a5dc..a768480 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -14,6 +14,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3041. Added the overall data model of timeline service next gen.
 (zjshen)
 
+YARN-3240. Implement client API to put generic entities. (Zhijie Shen via
+junping_du)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6b7c5538/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntities.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntities.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntities.java
new file mode 100644
index 000..39504cc
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntities.java
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.api.records.timelineservice;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+import 

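[Editor's note] The new TimelineEntities.java is cut off after the license
header. Given its role as a plain holder for a list of entities (see the client
API changes in this commit), the body is plausibly along the following lines;
this is a reconstruction sketch, not the committed 58-line file.

import java.util.ArrayList;
import java.util.List;

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "entities")
public class TimelineEntities {

  private List<TimelineEntity> entities = new ArrayList<TimelineEntity>();

  @XmlElement(name = "entities")
  public List<TimelineEntity> getEntities() {
    return entities;
  }

  public void setEntities(List<TimelineEntity> timelineEntities) {
    this.entities = timelineEntities;
  }

  public void addEntities(List<TimelineEntity> timelineEntities) {
    this.entities.addAll(timelineEntities);
  }

  public void addEntity(TimelineEntity entity) {
    this.entities.add(entity);
  }
}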
[30/50] [abbrv] hadoop git commit: YARN-3801. [JDK-8] Exclude jdk.tools from hbase-client and hbase-testing-util (Tsuyoshi Ozawa via sjlee)

2015-08-14 Thread vinodkv
YARN-3801. [JDK-8] Exclude jdk.tools from hbase-client and hbase-testing-util 
(Tsuyoshi Ozawa via sjlee)

(cherry picked from commit a1bb9137af84a34bde799f45e7ab8a21e33d55e0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/566824e6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/566824e6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/566824e6

Branch: refs/heads/YARN-2928
Commit: 566824e61e6825d8512260b8d7558e63e39b1d47
Parents: 30ee734
Author: Sangjin Lee sj...@apache.org
Authored: Mon Jun 15 21:15:33 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:25 2015 -0700

--
 hadoop-project/pom.xml  | 10 ++
 hadoop-yarn-project/CHANGES.txt |  3 +++
 2 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/566824e6/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 1182d6d..faac713 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1000,6 +1000,12 @@
         <groupId>org.apache.hbase</groupId>
         <artifactId>hbase-client</artifactId>
         <version>${hbase.version}</version>
+        <exclusions>
+          <exclusion>
+            <artifactId>jdk.tools</artifactId>
+            <groupId>jdk.tools</groupId>
+          </exclusion>
+        </exclusions>
       </dependency>
       <dependency>
         <groupId>org.apache.phoenix</groupId>
@@ -1046,6 +1052,10 @@
             <groupId>org.apache.hadoop</groupId>
             <artifactId>hadoop-minicluster</artifactId>
           </exclusion>
+          <exclusion>
+            <artifactId>jdk.tools</artifactId>
+            <groupId>jdk.tools</groupId>
+          </exclusion>
         </exclusions>
       </dependency>
     </dependencies>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/566824e6/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0777525..17d8b7b 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -85,6 +85,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3044. Made RM write app, attempt and optional container lifecycle
 events to timeline service v2. (Naganarasimha G R via zjshen)
 
+YARN-3801. [JDK-8] Exclude jdk.tools from hbase-client and
+hbase-testing-util (Tsuyoshi Ozawa via sjlee)
+
   IMPROVEMENTS
 
 YARN-3276. Code cleanup for timeline service API records. (Junping Du via



[03/50] [abbrv] hadoop git commit: YARN-3125. Made the distributed shell use timeline service next gen and add an integration test for it. Contributed by Junping Du and Li Lu.

2015-08-14 Thread vinodkv
YARN-3125. Made the distributed shell use timeline service next gen and add an 
integration test for it. Contributed by Junping Du and Li Lu.

(cherry picked from commit bf08f7f0ed4900ce52f98137297dd1a47ba2a536)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0f02ccf3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0f02ccf3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0f02ccf3

Branch: refs/heads/YARN-2928
Commit: 0f02ccf35068caa5b5e35fa145b039a2a39a6e64
Parents: ca324e8
Author: Zhijie Shen zjs...@apache.org
Authored: Fri Feb 27 08:46:42 2015 -0800
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:21 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../distributedshell/ApplicationMaster.java | 169 +--
 .../applications/distributedshell/Client.java   |  20 ++-
 .../distributedshell/TestDistributedShell.java  |  79 +++--
 .../aggregator/BaseAggregatorService.java   |   6 +
 .../aggregator/PerNodeAggregatorServer.java |   4 +-
 6 files changed, 256 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0f02ccf3/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index c8a4d48..97f2b7b 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -20,6 +20,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3087. Made the REST server of per-node aggregator work alone in NM
 daemon. (Li Lu via zjshen)
 
+YARN-3125. Made the distributed shell use timeline service next gen and
+add an integration test for it. (Junping Du and Li Lu via zjshen)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0f02ccf3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
index 5d2d6c2..fcf1556 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
@@ -212,6 +212,8 @@ public class ApplicationMaster {
   private int appMasterRpcPort = -1;
   // Tracking url to which app master publishes info for clients to monitor
   private String appMasterTrackingUrl = "";
+
+  private boolean newTimelineService = false;
 
   // App Master configuration
   // No. of containers to run shell command on
@@ -372,7 +374,8 @@
         "No. of containers on which the shell command needs to be executed");
     opts.addOption("priority", true, "Application Priority. Default 0");
     opts.addOption("debug", false, "Dump out debug information");
-
+    opts.addOption("timeline_service_version", true,
+        "Version for timeline service");
     opts.addOption("help", false, "Print usage");
     CommandLine cliParser = new GnuParser().parse(opts, args);

@@ -508,6 +511,30 @@
     }
     requestPriority = Integer.parseInt(cliParser
         .getOptionValue("priority", "0"));
+
+    if (conf.getBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED,
+        YarnConfiguration.DEFAULT_TIMELINE_SERVICE_ENABLED)) {
+      if (cliParser.hasOption("timeline_service_version")) {
+        String timelineServiceVersion =
+            cliParser.getOptionValue("timeline_service_version", "v1");
+        if (timelineServiceVersion.trim().equalsIgnoreCase("v1")) {
+          newTimelineService = false;
+        } else if (timelineServiceVersion.trim().equalsIgnoreCase("v2")) {
+          newTimelineService = true;
+        } else {
+          throw new IllegalArgumentException(
+              "timeline_service_version is not set properly, should be 'v1' or 'v2'");
+        }
+      }
+    } else {
+      timelineClient = null;
+      LOG.warn("Timeline service is not enabled");
+      if (cliParser.hasOption("timeline_service_version")) {
+

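[Editor's note] In practice this means a distributed shell run opts into the
next-gen service by adding "-timeline_service_version v2" to the arguments,
while "v1" (the default) keeps the old client path. That the Client forwards
the flag to the AM is inferred from the Client.java entry in the diffstat; the
relevant hunk is not shown in this excerpt.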
[22/50] [abbrv] hadoop git commit: MAPREDUCE-6327. Made MR AM use timeline service v2 API to write history events and counters. Contributed by Junping Du.

2015-08-14 Thread vinodkv
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ee9401c0/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRTimelineEventHandling.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRTimelineEventHandling.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRTimelineEventHandling.java
index eab9026..b3ea26e 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRTimelineEventHandling.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRTimelineEventHandling.java
@@ -18,22 +18,45 @@
 
 package org.apache.hadoop.mapred;
 
+import java.io.File;
+import java.io.IOException;
+
+import java.util.EnumSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.mapreduce.MRJobConfig;
 import org.apache.hadoop.mapreduce.jobhistory.EventType;
 import org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler;
 import org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.ApplicationReport;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
+import org.apache.hadoop.yarn.api.records.YarnApplicationState;
+import org.apache.hadoop.yarn.client.api.YarnClient;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.timeline.TimelineStore;
+import 
org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl;
+import org.apache.hadoop.yarn.util.timeline.TimelineUtils;
 
 import org.junit.Assert;
 import org.junit.Test;
 
 public class TestMRTimelineEventHandling {
 
+  private static final String TIMELINE_AUX_SERVICE_NAME = "timeline_collector";
+  private static final Log LOG =
+      LogFactory.getLog(TestMRTimelineEventHandling.class);
+
   @Test
   public void testTimelineServiceStartInMiniCluster() throws Exception {
 Configuration conf = new YarnConfiguration();
@@ -47,7 +70,7 @@ public class TestMRTimelineEventHandling {
 MiniMRYarnCluster cluster = null;
 try {
   cluster = new MiniMRYarnCluster(
-  TestJobHistoryEventHandler.class.getSimpleName(), 1);
+TestMRTimelineEventHandling.class.getSimpleName(), 1);
   cluster.init(conf);
   cluster.start();
 
@@ -88,7 +111,7 @@ public class TestMRTimelineEventHandling {
 MiniMRYarnCluster cluster = null;
 try {
   cluster = new MiniMRYarnCluster(
-  TestJobHistoryEventHandler.class.getSimpleName(), 1);
+TestMRTimelineEventHandling.class.getSimpleName(), 1);
   cluster.init(conf);
   cluster.start();
   TimelineStore ts = cluster.getApplicationHistoryServer()
@@ -132,6 +155,140 @@ public class TestMRTimelineEventHandling {
   }
 }
   }
+  
+  @Test
+  public void testMRNewTimelineServiceEventHandling() throws Exception {
+    LOG.info("testMRNewTimelineServiceEventHandling start.");
+    Configuration conf = new YarnConfiguration();
+    conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true);
+    conf.setBoolean(MRJobConfig.MAPREDUCE_JOB_EMIT_TIMELINE_DATA, true);
+
+    // enable new timeline service in MR side
+    conf.setBoolean(MRJobConfig.MAPREDUCE_JOB_NEW_TIMELINE_SERVICE_ENABLED,
+        true);
+
+    // enable aux-service based timeline collectors
+    conf.set(YarnConfiguration.NM_AUX_SERVICES, TIMELINE_AUX_SERVICE_NAME);
+    conf.set(YarnConfiguration.NM_AUX_SERVICES + "." + TIMELINE_AUX_SERVICE_NAME
+        + ".class", PerNodeTimelineCollectorsAuxService.class.getName());
+
+    conf.setBoolean(YarnConfiguration.SYSTEM_METRICS_PUBLISHER_ENABLED, true);
+
+    MiniMRYarnCluster cluster = null;
+    try {
+      cluster = new MiniMRYarnCluster(
+          TestMRTimelineEventHandling.class.getSimpleName(), 1, true);
+      cluster.init(conf);
+      cluster.start();
+      LOG.info("A MiniMRYarnCluster get start.");
+
+      Path inDir = new Path("input");
+      Path outDir = new Path("output");
+      LOG.info("Run 1st job which should be successful.");
+      RunningJob job =
+          UtilsForTests.runJobSucceed(new JobConf(conf), inDir, outDir);
+  

[36/50] [abbrv] hadoop git commit: YARN-3949. Ensure timely flush of timeline writes. Contributed by Sangjin Lee.

2015-08-14 Thread vinodkv
YARN-3949. Ensure timely flush of timeline writes. Contributed by Sangjin Lee.

(cherry picked from commit 967bef7e0396d857913caa2574afb103a5f0b81b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a0c1e505
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a0c1e505
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a0c1e505

Branch: refs/heads/YARN-2928
Commit: a0c1e505437169324e4d20797ea016f82bf395d8
Parents: d5ca631
Author: Junping Du junping...@apache.org
Authored: Sat Jul 25 10:30:29 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:26 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  9 +++
 .../src/main/resources/yarn-default.xml | 17 -
 .../collector/TimelineCollectorManager.java | 65 ++--
 .../storage/FileSystemTimelineWriterImpl.java   |  5 ++
 .../storage/HBaseTimelineWriterImpl.java|  6 ++
 .../storage/PhoenixTimelineWriterImpl.java  |  5 ++
 .../timelineservice/storage/TimelineWriter.java |  9 +++
 .../TestNMTimelineCollectorManager.java |  5 ++
 9 files changed, 119 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0c1e505/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a508146..cd05140 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -79,6 +79,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3047. [Data Serving] Set up ATS reader with basic request serving
 structure and lifecycle (Varun Saxena via sjlee)
 
+YARN-3949. Ensure timely flush of timeline writes. (Sangjin Lee via
+junping_du)
+
   IMPROVEMENTS
 
 YARN-3276. Code cleanup for timeline service API records. (Junping Du via

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0c1e505/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 5d5eb10..cec2760 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1455,6 +1455,15 @@ public class YarnConfiguration extends Configuration {
   public static final String TIMELINE_SERVICE_READER_CLASS =
       TIMELINE_SERVICE_PREFIX + "reader.class";

+  /** The setting that controls how often the timeline collector flushes the
+   * timeline writer.
+   */
+  public static final String TIMELINE_SERVICE_WRITER_FLUSH_INTERVAL_SECONDS =
+      TIMELINE_SERVICE_PREFIX + "writer.flush-interval-seconds";
+
+  public static final int
+      DEFAULT_TIMELINE_SERVICE_WRITER_FLUSH_INTERVAL_SECONDS = 60;
+
   // mark app-history related configs @Private as application history is going
   // to be integrated into the timeline service
   @Private

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0c1e505/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index d15d675..7b5941c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -758,7 +758,15 @@
     <name>yarn.system-metrics-publisher.enabled</name>
     <value>false</value>
   </property>
-
+
+  <property>
+    <description>The setting that controls whether yarn container metrics is
+    published to the timeline server or not by RM. This configuration setting is
+    for ATS V2.</description>
+    <name>yarn.rm.system-metrics-publisher.emit-container-events</name>
+    <value>false</value>
+  </property>
+

   <property>
     <description>Number of worker threads that send the yarn system metrics
@@ -1867,6 +1875,13 @@
     <value>${hadoop.tmp.dir}/yarn/timeline</value>
   </property>

+  <property>
+    <description>The setting that controls how often the timeline collector
+    flushes the timeline writer.</description>
+

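[Editor's note] Tying the new knob together: a hedged sketch of how a collector
manager could consume the flush interval. Only the configuration constant names
come from the diff above; the executor wiring and the WriterFlusherSketch class
are illustrative, not the committed TimelineCollectorManager code.

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.timelineservice.storage.TimelineWriter;

public class WriterFlusherSketch {

  public static ScheduledExecutorService scheduleFlusher(
      Configuration conf, final TimelineWriter writer) {
    // read the flush period added by this change; the default is 60 seconds
    final long period = conf.getLong(
        YarnConfiguration.TIMELINE_SERVICE_WRITER_FLUSH_INTERVAL_SECONDS,
        YarnConfiguration.DEFAULT_TIMELINE_SERVICE_WRITER_FLUSH_INTERVAL_SECONDS);
    ScheduledExecutorService flusher =
        Executors.newSingleThreadScheduledExecutor();
    flusher.scheduleAtFixedRate(new Runnable() {
      @Override
      public void run() {
        try {
          // flush() is the method this change adds to TimelineWriter
          writer.flush();
        } catch (IOException e) {
          // a real implementation would log and keep flushing
        }
      }
    }, period, period, TimeUnit.SECONDS);
    return flusher;
  }
}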
[34/50] [abbrv] hadoop git commit: YARN-3721. build is broken on YARN-2928 branch due to possible dependency cycle (Li Lu via sjlee)

2015-08-14 Thread vinodkv
YARN-3721. build is broken on YARN-2928 branch due to possible dependency cycle 
(Li Lu via sjlee)

(cherry picked from commit a9738ceb17b50cce8844fd42bb800c7f83f15caf)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/09a9b13a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/09a9b13a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/09a9b13a

Branch: refs/heads/YARN-2928
Commit: 09a9b13a6579f7f1d2189837d8734c3f58b016e2
Parents: 5d6eca2
Author: Sangjin Lee sj...@apache.org
Authored: Thu May 28 12:03:53 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Aug 14 11:23:25 2015 -0700

--
 hadoop-project/pom.xml  | 97 ++--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../hadoop-yarn-server-timelineservice/pom.xml  |  1 -
 3 files changed, 53 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/09a9b13a/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 7a35939..1182d6d 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -996,55 +996,58 @@
         </exclusions>
       </dependency>

-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-client</artifactId>
-      <version>${hbase.version}</version>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.phoenix</groupId>
-      <artifactId>phoenix-core</artifactId>
-      <version>${phoenix.version}</version>
-      <exclusions>
-        <!-- Exclude jline from here -->
-        <exclusion>
-          <artifactId>jline</artifactId>
-          <groupId>jline</groupId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.phoenix</groupId>
-      <artifactId>phoenix-core</artifactId>
-      <type>test-jar</type>
-      <version>${phoenix.version}</version>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-it</artifactId>
-      <version>${hbase.version}</version>
-      <scope>test</scope>
-      <classifier>tests</classifier>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-testing-util</artifactId>
-      <version>${hbase.version}</version>
-      <scope>test</scope>
-      <optional>true</optional>
-      <exclusions>
-        <exclusion>
-          <groupId>org.jruby</groupId>
-          <artifactId>jruby-complete</artifactId>
-        </exclusion>
-        <exclusion>
+      <dependency>
+        <groupId>org.apache.hbase</groupId>
+        <artifactId>hbase-client</artifactId>
+        <version>${hbase.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.phoenix</groupId>
+        <artifactId>phoenix-core</artifactId>
+        <version>${phoenix.version}</version>
+        <exclusions>
+          <!-- Exclude jline from here -->
+          <exclusion>
+            <artifactId>jline</artifactId>
+            <groupId>jline</groupId>
+          </exclusion>
+        </exclusions>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.phoenix</groupId>
+        <artifactId>phoenix-core</artifactId>
+        <type>test-jar</type>
+        <version>${phoenix.version}</version>
+        <scope>test</scope>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.hbase</groupId>
+        <artifactId>hbase-it</artifactId>
+        <version>${hbase.version}</version>
+        <scope>test</scope>
+        <classifier>tests</classifier>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.hbase</groupId>
+        <artifactId>hbase-testing-util</artifactId>
+        <version>${hbase.version}</version>
+        <scope>test</scope>
+        <optional>true</optional>
+        <exclusions>
+          <exclusion>
+            <groupId>org.jruby</groupId>
+            <artifactId>jruby-complete</artifactId>
+          </exclusion>
+          <exclusion>
             <groupId>org.apache.hadoop</groupId>
             <artifactId>hadoop-hdfs</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-
+          </exclusion>
+          <exclusion>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-minicluster</artifactId>
+          </exclusion>
+        </exclusions>
+      </dependency>
     </dependencies>
   </dependencyManagement>
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/09a9b13a/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 6eb8e4b..a74f8cd 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -79,6 +79,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3726. Fix TestHBaseTimelineWriterImpl unit test failure by fixing its
 test data (Vrushali C via sjlee)
 
+  
