hadoop git commit: HDFS-8412. Fix the test failures in HTTPFS: In some tests setReplication called after fs close. Contributed by Uma Maheswara Rao G.

2015-05-18 Thread umamahesh
Repository: hadoop
Updated Branches:
  refs/heads/trunk 363c35541 -> a6af0248e


HDFS-8412. Fix the test failures in HTTPFS: In some tests setReplication called 
after fs close. Contributed by Uma Maheswara Rao G.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a6af0248
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a6af0248
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a6af0248

Branch: refs/heads/trunk
Commit: a6af0248e9ec75e8e46ac96593070e0c9841a660
Parents: 363c355
Author: Uma Maheswara Rao G umamah...@apache.org
Authored: Mon May 18 19:35:53 2015 +0530
Committer: Uma Maheswara Rao G umamah...@apache.org
Committed: Mon May 18 19:35:53 2015 +0530

--
 .../java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java  | 2 +-
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt| 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6af0248/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
index 2cc67d4..0e082cc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
@@ -465,8 +465,8 @@ public abstract class BaseTestHttpFSWith extends 
HFSTestCase {
 OutputStream os = fs.create(path);
 os.write(1);
 os.close();
-fs.close();
 fs.setReplication(path, (short) 2);
+fs.close();
 
 fs = getHttpFSFileSystem();
 fs.setReplication(path, (short) 1);
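
The reordering matters because of HDFS-8332, listed in the CHANGES.txt hunk below: DFS client API calls now check whether the filesystem is closed, so setReplication must run before fs.close(). A minimal sketch of the corrected ordering against the public FileSystem API (configuration and path here are illustrative, not taken from the patch):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CloseOrderingDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path("/tmp/replication-demo.txt");  // illustrative path

    FSDataOutputStream os = fs.create(path);
    os.write(1);
    os.close();                          // closing the stream is fine

    fs.setReplication(path, (short) 2);  // must run while fs is still open
    fs.close();                          // close the filesystem last

    // In the reversed order, the DFS client's closed-state check (HDFS-8332)
    // makes setReplication throw IOException("Filesystem closed").
  }
}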

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6af0248/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 3e0d360..8d0c5b6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -332,6 +332,8 @@ Trunk (Unreleased)
 
 HDFS-8332. DFS client API calls should check filesystem closed (Rakesh R 
via umamahesh)
 
+HDFS-8412. Fix the test failures in HTTPFS. (umamahesh)
+
 Release 2.8.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES



hadoop git commit: HADOOP-10582. Fix the test case for copying to non-existent dir in TestFsShellCopy. Contributed by Kousuke Saruta.

2015-05-18 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 b26d99c39 -> 8969a5d45


HADOOP-10582. Fix the test case for copying to non-existent dir in 
TestFsShellCopy. Contributed by Kousuke Saruta.

(cherry picked from commit a46506d99cb1310c0e446d590f36fb9afae0fa60)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8969a5d4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8969a5d4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8969a5d4

Branch: refs/heads/branch-2
Commit: 8969a5d45a425e02ced9d09a26624f15643b5b3a
Parents: b26d99c
Author: Akira Ajisaka aajis...@apache.org
Authored: Mon May 18 16:31:41 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Mon May 18 16:32:22 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt| 3 +++
 .../src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java| 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8969a5d4/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 8a03675..117bca2 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -230,6 +230,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11988. Fix typo in the document for hadoop fs -find.
 (Kengo Seki via aajisaka)
 
+HADOOP-10582. Fix the test case for copying to non-existent dir in
+TestFsShellCopy. (Kousuke Saruta via aajisaka)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8969a5d4/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java
index bef0c9f..c0a6b20 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java
@@ -177,9 +177,9 @@ public class TestFsShellCopy {
   checkPut(0, srcPath, dstPath, useWindowsPath);
 }
 
-// copy to non-existent subdir
-prepPut(childPath, false, false);
-checkPut(1, srcPath, dstPath, useWindowsPath);
+// copy to non-existent dir
+prepPut(dstPath, false, false);
+checkPut(1, srcPath, childPath, useWindowsPath);
 
 // copy into dir, then with another name
 prepPut(dstPath, true, true);
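
The swap makes the test exercise what its comment promises: a put whose destination directory genuinely does not exist, which must fail with exit code 1. A sketch of that contract through the programmatic FsShell entry point (the paths are illustrative, not the test's fixtures):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.util.ToolRunner;

public class PutToMissingDirDemo {
  public static void main(String[] args) throws Exception {
    FsShell shell = new FsShell(new Configuration());
    // The destination's parent directory does not exist, so the copy is
    // expected to return a non-zero exit code, not create it implicitly.
    int rc = ToolRunner.run(shell,
        new String[] { "-put", "/tmp/src.txt", "/no/such/dir/dst.txt" });
    System.out.println("exit code = " + rc);  // expected: non-zero
  }
}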



hadoop git commit: HADOOP-11884. test-patch.sh should pull the real findbugs version (Kengo Seki via aw)

2015-05-18 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk a6af0248e -> 182d86dac


HADOOP-11884. test-patch.sh should pull the real findbugs version  (Kengo Seki 
via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/182d86da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/182d86da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/182d86da

Branch: refs/heads/trunk
Commit: 182d86dac04a2168b1888af34f0a7042379d7e53
Parents: a6af024
Author: Allen Wittenauer a...@apache.org
Authored: Mon May 18 16:08:49 2015 +
Committer: Allen Wittenauer a...@apache.org
Committed: Mon May 18 16:08:49 2015 +

--
 dev-support/test-patch.sh   | 5 +++--
 hadoop-common-project/hadoop-common/CHANGES.txt | 3 +++
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/182d86da/dev-support/test-patch.sh
--
diff --git a/dev-support/test-patch.sh b/dev-support/test-patch.sh
index 9cc5bb0..00a638c 100755
--- a/dev-support/test-patch.sh
+++ b/dev-support/test-patch.sh
@@ -1859,8 +1859,6 @@ function check_findbugs
 return 1
   fi
 
-  findbugs_version=$(${FINDBUGS_HOME}/bin/findbugs -version)
-
   for module in ${modules}
   do
  pushd ${module} >/dev/null
@@ -1872,6 +1870,9 @@ function check_findbugs
  popd >/dev/null
   done
 
+  #shellcheck disable=SC2016
+  findbugs_version=$(${AWK} 'match($0, /findbugs-maven-plugin:[^:]*:findbugs/) { print substr($0, RSTART + 22, RLENGTH - 31); exit }' "${PATCH_DIR}/patchFindBugsOutput${module_suffix}.txt")
+
   if [[ ${rc} -ne 0 ]]; then
 add_jira_table -1 findbugs "The patch appears to cause Findbugs (version ${findbugs_version}) to fail."
 return 1
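
The magic numbers in the awk call follow from the pattern: the match is findbugs-maven-plugin:<version>:findbugs, the prefix findbugs-maven-plugin: is 22 characters and the suffix :findbugs is 9, so substr($0, RSTART + 22, RLENGTH - 31) keeps exactly the version. The same extraction in Java, for readers who do not think in awk (the sample line is invented):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FindbugsVersionDemo {
  public static void main(String[] args) {
    // A Maven output line of the shape the awk program scans for (invented).
    String line = "[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) ---";

    // A capture group replaces the RSTART/RLENGTH offset arithmetic.
    Matcher m = Pattern.compile("findbugs-maven-plugin:([^:]*):findbugs")
        .matcher(line);
    if (m.find()) {
      System.out.println(m.group(1));  // prints: 3.0.0
    }
  }
}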

http://git-wip-us.apache.org/repos/asf/hadoop/blob/182d86da/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 2138334..1c2cdaa 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -575,6 +575,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11939. Deprecate DistCpV1 and Logalyzer.
 (Brahma Reddy Battula via aajisaka)
 
+HADOOP-11884. test-patch.sh should pull the real findbugs version
+(Kengo Seki via aw)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp



hadoop git commit: HADOOP-11944. add option to test-patch to avoid relocating patch process directory (Sean Busbey via aw)

2015-05-18 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 d8920 -> 10c922b5c


HADOOP-11944. add option to test-patch to avoid relocating patch process 
directory (Sean Busbey via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/10c922b5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/10c922b5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/10c922b5

Branch: refs/heads/branch-2
Commit: 10c922b5c6b9282061ca8e4ae7f4b68012a3b4a3
Parents: d89
Author: Allen Wittenauer a...@apache.org
Authored: Mon May 18 16:13:50 2015 +
Committer: Allen Wittenauer a...@apache.org
Committed: Mon May 18 16:14:24 2015 +

--
 dev-support/test-patch.sh   | 28 +++-
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 2 files changed, 18 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/10c922b5/dev-support/test-patch.sh
--
diff --git a/dev-support/test-patch.sh b/dev-support/test-patch.sh
index 00a638c..ae74c5b 100755
--- a/dev-support/test-patch.sh
+++ b/dev-support/test-patch.sh
@@ -38,6 +38,7 @@ function setup_defaults
  HOW_TO_CONTRIBUTE="https://wiki.apache.org/hadoop/HowToContribute"
   JENKINS=false
   BASEDIR=$(pwd)
+  RELOCATE_PATCH_DIR=false
 
   FINDBUGS_HOME=${FINDBUGS_HOME:-}
   ECLIPSE_HOME=${ECLIPSE_HOME:-}
@@ -607,6 +608,7 @@ function hadoop_usage
  echo "--eclipse-home=<path>  Eclipse home directory (default ECLIPSE_HOME environment variable)"
  echo "--jira-cmd=<cmd>   The 'jira' command to use (default 'jira')"
  echo "--jira-password=<pw>   The password for the 'jira' command"
+  echo "--mv-patch-dir Move the patch-dir into the basedir during cleanup."
  echo "--wget-cmd=<cmd>   The 'wget' command to use (default 'wget')"
 }
 
@@ -692,6 +694,9 @@ function parse_args
   --mvn-cmd=*)
 MVN=${i#*=}
   ;;
+  --mv-patch-dir)
+RELOCATE_PATCH_DIR=true;
+  ;;
   --offline)
 OFFLINE=true
   ;;
@@ -2323,19 +2328,16 @@ function cleanup_and_exit
 {
   local result=$1
 
-  if [[ ${JENKINS} == true ]] ; then
-if [[ -e ${PATCH_DIR} ]] ; then
-  if [[ -d ${PATCH_DIR} ]]; then
-# if PATCH_DIR is already inside BASEDIR, then
-# there is no need to move it since we assume that
-# Jenkins or whatever already knows where it is at
-# since it told us to put it there!
-relative_patchdir >/dev/null
-if [[ $? == 1 ]]; then
-  hadoop_debug "mv ${PATCH_DIR} ${BASEDIR}"
-  mv ${PATCH_DIR} ${BASEDIR}
-fi
-  fi
+  if [[ ${JENKINS} == true && ${RELOCATE_PATCH_DIR} == true && \
+  -e ${PATCH_DIR} && -d ${PATCH_DIR} ]] ; then
+# if PATCH_DIR is already inside BASEDIR, then
+# there is no need to move it since we assume that
+# Jenkins or whatever already knows where it is at
+# since it told us to put it there!
+relative_patchdir >/dev/null
+if [[ $? == 1 ]]; then
+  hadoop_debug "mv ${PATCH_DIR} ${BASEDIR}"
+  mv ${PATCH_DIR} ${BASEDIR}
 fi
   fi
  big_console_header "Finished build."

http://git-wip-us.apache.org/repos/asf/hadoop/blob/10c922b5/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index ec11a02..6ca3f1e 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -107,6 +107,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11884. test-patch.sh should pull the real findbugs version
 (Kengo Seki via aw)
 
+HADOOP-11944. add option to test-patch to avoid relocating patch process
+directory (Sean Busbey via aw)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp



[13/50] hadoop git commit: HDFS-8033. Erasure coding: stateful (non-positional) read from files in striped layout. Contributed by Zhe Zhang.

2015-05-18 Thread zhz
HDFS-8033. Erasure coding: stateful (non-positional) read from files in striped 
layout. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c79ec3c4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c79ec3c4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c79ec3c4

Branch: refs/heads/HDFS-7285
Commit: c79ec3c467a5e275a6a4e0635560df946d3055a6
Parents: 98ebb4e
Author: Zhe Zhang z...@apache.org
Authored: Fri Apr 24 22:36:15 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:48 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../org/apache/hadoop/hdfs/DFSInputStream.java  |  55 ++--
 .../hadoop/hdfs/DFSStripedInputStream.java  | 311 ++-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  |  43 +++
 .../apache/hadoop/hdfs/TestReadStripedFile.java | 110 ++-
 5 files changed, 465 insertions(+), 57 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c79ec3c4/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index cf41a9b..e8db485 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -131,3 +131,6 @@
 
 HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may 
cause 
 block id conflicts (Jing Zhao via Zhe Zhang)
+
+HDFS-8033. Erasure coding: stateful (non-positional) read from files in 
+striped layout (Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c79ec3c4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index 16250dd..6eb25d0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -95,34 +95,34 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   public static boolean tcpReadsDisabledForTesting = false;
   private long hedgedReadOpsLoopNumForTesting = 0;
   protected final DFSClient dfsClient;
-  private AtomicBoolean closed = new AtomicBoolean(false);
-  private final String src;
-  private final boolean verifyChecksum;
+  protected AtomicBoolean closed = new AtomicBoolean(false);
+  protected final String src;
+  protected final boolean verifyChecksum;
 
   // state by stateful read only:
   // (protected by lock on this)
   /
   private DatanodeInfo currentNode = null;
-  private LocatedBlock currentLocatedBlock = null;
-  private long pos = 0;
-  private long blockEnd = -1;
+  protected LocatedBlock currentLocatedBlock = null;
+  protected long pos = 0;
+  protected long blockEnd = -1;
   private BlockReader blockReader = null;
   
 
   // state shared by stateful and positional read:
   // (protected by lock on infoLock)
   
-  private LocatedBlocks locatedBlocks = null;
+  protected LocatedBlocks locatedBlocks = null;
   private long lastBlockBeingWrittenLength = 0;
   private FileEncryptionInfo fileEncryptionInfo = null;
-  private CachingStrategy cachingStrategy;
+  protected CachingStrategy cachingStrategy;
   
 
-  private final ReadStatistics readStatistics = new ReadStatistics();
+  protected final ReadStatistics readStatistics = new ReadStatistics();
   // lock for state shared between read and pread
   // Note: Never acquire a lock on this with this lock held to avoid 
deadlocks
   //   (it's OK to acquire this lock when the lock on this is held)
-  private final Object infoLock = new Object();
+  protected final Object infoLock = new Object();
 
   /**
* Track the ByteBuffers that we have handed out to readers.
@@ -239,7 +239,7 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
* back to the namenode to get a new list of block locations, and is
* capped at maxBlockAcquireFailures
*/
-  private int failures = 0;
+  protected int failures = 0;
 
   /* XXX Use of CocurrentHashMap is temp fix. Need to fix 
* parallel accesses to DFSInputStream (through ptreads) properly */
@@ -476,7 +476,7 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   }
 
   /** Fetch a block from namenode and cache it */
-  private void fetchBlockAt(long offset) throws IOException {
+  protected void fetchBlockAt(long offset) throws IOException {
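
The bulk of this patch is visibility widening: fields and hooks in DFSInputStream move from private to protected so that DFSStripedInputStream can reuse the stateful-read machinery and override only how blocks are located. A self-contained sketch of the pattern (the class and method names below are stand-ins, not the real HDFS types):

// Stand-ins illustrating the private -> protected refactor; the real classes
// are DFSInputStream and DFSStripedInputStream.
abstract class BaseStream {
  protected long pos = 0;  // was private; subclasses now see the read position

  // Overridable hook: the base class fetches a contiguous block.
  protected void fetchBlockAt(long offset) {
    System.out.println("fetch contiguous block at " + offset);
  }

  int read() {
    fetchBlockAt(pos);
    return (int) pos++;
  }
}

class StripedStream extends BaseStream {
  @Override
  protected void fetchBlockAt(long offset) {
    // Striped layout: resolve the block group covering this offset instead.
    System.out.println("fetch striped block group at " + offset);
  }
}

public class VisibilityDemo {
  public static void main(String[] args) {
    new StripedStream().read();  // base read loop, striped fetch
  }
}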

[02/50] hadoop git commit: HDFS-8145. Fix the editlog corruption exposed by failed TestAddStripedBlocks. Contributed by Jing Zhao.

2015-05-18 Thread zhz
HDFS-8145. Fix the editlog corruption exposed by failed TestAddStripedBlocks. 
Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7541921c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7541921c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7541921c

Branch: refs/heads/HDFS-7285
Commit: 7541921c83353a1b5116038d8ebce3ef574689a9
Parents: 9d26027
Author: Jing Zhao ji...@apache.org
Authored: Fri Apr 17 18:13:47 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:45 2015 -0700

--
 .../blockmanagement/BlockInfoStriped.java   |  7 --
 .../namenode/ErasureCodingZoneManager.java  | 12 +-
 .../hdfs/server/namenode/FSDirectory.java   |  6 ++---
 .../hdfs/server/namenode/FSEditLogLoader.java   | 14 +++-
 .../hdfs/server/namenode/FSImageFormat.java |  4 +---
 .../server/namenode/FSImageSerialization.java   | 13 +--
 .../blockmanagement/TestBlockInfoStriped.java   | 23 ++--
 .../hdfs/server/namenode/TestFSImage.java   |  2 +-
 8 files changed, 31 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7541921c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
index 9f2f5ba..23e3153 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
@@ -244,13 +244,6 @@ public class BlockInfoStriped extends BlockInfo {
 return num;
   }
 
-  @Override
-  public void write(DataOutput out) throws IOException {
-out.writeShort(dataBlockNum);
-out.writeShort(parityBlockNum);
-super.write(out);
-  }
-
   /**
* Convert a complete block to an under construction block.
* @return BlockInfoUnderConstruction -  an under construction block.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7541921c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
index 0a84083..3f94227 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
@@ -54,10 +54,6 @@ public class ErasureCodingZoneManager {
 this.dir = dir;
   }
 
-  boolean getECPolicy(INodesInPath iip) throws IOException {
-return getECSchema(iip) != null;
-  }
-
   ECSchema getECSchema(INodesInPath iip) throws IOException {
 ECZoneInfo ecZoneInfo = getECZoneInfo(iip);
 return ecZoneInfo == null ? null : ecZoneInfo.getSchema();
@@ -109,7 +105,7 @@ public class ErasureCodingZoneManager {
  throw new IOException("Attempt to create an erasure coding zone " +
  "for a file.");
 }
-if (getECPolicy(srcIIP)) {
+if (getECSchema(srcIIP) != null) {
   throw new IOException("Directory " + src + " is already in an " +
   "erasure coding zone.");
 }
@@ -132,8 +128,10 @@ public class ErasureCodingZoneManager {
   void checkMoveValidity(INodesInPath srcIIP, INodesInPath dstIIP, String src)
   throws IOException {
 assert dir.hasReadLock();
-if (getECPolicy(srcIIP)
-!= getECPolicy(dstIIP)) {
+final ECSchema srcSchema = getECSchema(srcIIP);
+final ECSchema dstSchema = getECSchema(dstIIP);
+if ((srcSchema != null && !srcSchema.equals(dstSchema)) ||
+(dstSchema != null && !dstSchema.equals(srcSchema))) {
   throw new IOException(
   src + " can't be moved because the source and destination have " +
   "different erasure coding policies.");
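
The new pair of clauses is a null-safe inequality test: the move is rejected when source and destination resolve to different schemas, with null meaning "not in an erasure coding zone". Assuming ECSchema.equals is symmetric, the condition collapses to Objects.equals, as in this sketch (stand-in values, not the committed code):

import java.util.Objects;

public class SchemaCompareDemo {
  public static void main(String[] args) {
    String srcSchema = "RS-6-3";  // stand-ins for ECSchema instances
    String dstSchema = null;      // null == not in an erasure coding zone

    // Equivalent to:
    //   (srcSchema != null && !srcSchema.equals(dstSchema)) ||
    //   (dstSchema != null && !dstSchema.equals(srcSchema))
    if (!Objects.equals(srcSchema, dstSchema)) {
      System.out.println("reject: different erasure coding policies");
    }
  }
}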

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7541921c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
 

[23/50] hadoop git commit: HDFS-8183. Erasure Coding: Improve DFSStripedOutputStream closing of datastreamer threads. Contributed by Rakesh R.

2015-05-18 Thread zhz
HDFS-8183. Erasure Coding: Improve DFSStripedOutputStream closing of 
datastreamer threads. Contributed by Rakesh R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bf2d0ac7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bf2d0ac7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bf2d0ac7

Branch: refs/heads/HDFS-7285
Commit: bf2d0ac717768e1cf8592e22fe278b7867381423
Parents: 6dcb9b1
Author: Zhe Zhang z...@apache.org
Authored: Thu Apr 30 00:13:32 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:50 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +++
 .../org/apache/hadoop/hdfs/DFSStripedOutputStream.java  | 12 ++--
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf2d0ac7/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index ca60487..3c75152 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -149,3 +149,6 @@
 
 HDFS-8282. Erasure coding: move striped reading logic to StripedBlockUtil.
 (Zhe Zhang)
+
+HDFS-8183. Erasure Coding: Improve DFSStripedOutputStream closing of 
+datastreamer threads. (Rakesh R via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf2d0ac7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
index c930187..5e2a534 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
@@ -331,18 +331,26 @@ public class DFSStripedOutputStream extends 
DFSOutputStream {
   // interrupt datastreamer if force is true
   @Override
   protected void closeThreads(boolean force) throws IOException {
+int index = 0;
+boolean exceptionOccurred = false;
 for (StripedDataStreamer streamer : streamers) {
   try {
 streamer.close(force);
 streamer.join();
 streamer.closeSocket();
-  } catch (InterruptedException e) {
-throw new IOException("Failed to shutdown streamer");
+  } catch (InterruptedException | IOException e) {
+DFSClient.LOG.error("Failed to shutdown streamer: name="
++ streamer.getName() + ", index=" + index + ", file=" + src, e);
+exceptionOccurred = true;
   } finally {
 streamer.setSocketToNull();
 setClosed();
+index++;
   }
 }
+if (exceptionOccurred) {
+  throw new IOException("Failed to shutdown streamer");
+}
   }
 
   /**



[34/50] hadoop git commit: HDFS-8203. Erasure Coding: Seek and other Ops in DFSStripedInputStream. Contributed by Yi Liu.

2015-05-18 Thread zhz
HDFS-8203. Erasure Coding: Seek and other Ops in DFSStripedInputStream. 
Contributed by Yi Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/282349ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/282349ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/282349ec

Branch: refs/heads/HDFS-7285
Commit: 282349eccf5978ed220af604caf53be13e808d22
Parents: 2bd8348
Author: Jing Zhao ji...@apache.org
Authored: Thu May 7 11:06:40 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:00 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +
 .../hadoop/hdfs/DFSStripedInputStream.java  | 88 +---
 .../hadoop/hdfs/TestWriteReadStripedFile.java   | 83 +++---
 3 files changed, 151 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/282349ec/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 11e8376..fed08e1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -186,3 +186,6 @@
 
 HDFS-8129. Erasure Coding: Maintain consistent naming for Erasure Coding 
related classes - EC/ErasureCoding
 (umamahesh)
+
+HDFS-8203. Erasure Coding: Seek and other Ops in DFSStripedInputStream.
+(Yi Liu via jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/282349ec/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index 7cb7b6d..9011192 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -19,10 +19,13 @@ package org.apache.hadoop.hdfs;
 
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.fs.ChecksumException;
+import org.apache.hadoop.fs.ReadOption;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.*;
 import 
org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException;
 import org.apache.hadoop.hdfs.util.StripedBlockUtil;
+import org.apache.hadoop.io.ByteBufferPool;
+
 import static org.apache.hadoop.hdfs.util.StripedBlockUtil.ReadPortion;
 import static org.apache.hadoop.hdfs.util.StripedBlockUtil.planReadPortions;
 
@@ -31,9 +34,11 @@ import org.apache.htrace.Span;
 import org.apache.htrace.Trace;
 import org.apache.htrace.TraceScope;
 
+import java.io.EOFException;
 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.nio.ByteBuffer;
+import java.util.EnumSet;
 import java.util.Set;
 import java.util.Map;
 import java.util.HashMap;
@@ -263,6 +268,10 @@ public class DFSStripedInputStream extends DFSInputStream {
   }
 
   private long getOffsetInBlockGroup() {
+return getOffsetInBlockGroup(pos);
+  }
+
+  private long getOffsetInBlockGroup(long pos) {
 return pos - currentLocatedBlock.getStartOffset();
   }
 
@@ -278,18 +287,22 @@ public class DFSStripedInputStream extends DFSInputStream 
{
 // compute stripe range based on pos
 final long offsetInBlockGroup = getOffsetInBlockGroup();
 final long stripeLen = cellSize * dataBlkNum;
-int stripeIndex = (int) (offsetInBlockGroup / stripeLen);
-curStripeRange = new StripeRange(stripeIndex * stripeLen,
-Math.min(currentLocatedBlock.getBlockSize() - (stripeIndex * 
stripeLen),
-stripeLen));
-final int numCell = (int) ((curStripeRange.length - 1) / cellSize + 1);
+final int stripeIndex = (int) (offsetInBlockGroup / stripeLen);
+final int stripeBufOffset = (int) (offsetInBlockGroup % stripeLen);
+final int stripeLimit = (int) Math.min(currentLocatedBlock.getBlockSize()
+- (stripeIndex * stripeLen), stripeLen);
+curStripeRange = new StripeRange(offsetInBlockGroup,
+stripeLimit - stripeBufOffset);
+
+final int startCell = stripeBufOffset / cellSize;
+final int numCell = (stripeLimit - 1) / cellSize + 1;
 
 // read the whole stripe in parallel
 Map<Future<Integer>, Integer> futures = new HashMap<>();
-for (int i = 0; i < numCell; i++) {
-  curStripeBuf.position(cellSize * i);
-  curStripeBuf.limit((int) Math.min(cellSize * (i + 1),
-  curStripeRange.length));
+for (int i 
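
The new stripe bookkeeping is easier to follow with numbers plugged in. A self-contained sketch with illustrative parameters (64 KB cells and three data blocks, so one stripe spans 192 KB; none of these values are mandated by the patch):

public class StripeMathDemo {
  public static void main(String[] args) {
    final long cellSize = 64 * 1024;               // illustrative cell size
    final int dataBlkNum = 3;                      // illustrative data blocks
    final long stripeLen = cellSize * dataBlkNum;  // 192 KB per stripe

    long offsetInBlockGroup = 200 * 1024;  // a seek target inside the group
    long blockGroupSize = 4 * stripeLen;   // pretend the group holds 4 stripes

    int stripeIndex = (int) (offsetInBlockGroup / stripeLen);      // 1
    int stripeBufOffset = (int) (offsetInBlockGroup % stripeLen);  // 8 KB in
    int stripeLimit = (int) Math.min(
        blockGroupSize - stripeIndex * stripeLen, stripeLen);      // 192 KB

    int startCell = (int) (stripeBufOffset / cellSize);            // cell 0
    int numCell = (int) ((stripeLimit - 1) / cellSize + 1);        // 3 cells

    System.out.printf("stripe %d, buf offset %d, limit %d, cells %d..%d%n",
        stripeIndex, stripeBufOffset, stripeLimit,
        startCell, startCell + numCell - 1);
  }
}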

[45/50] hadoop git commit: HDFS-8367 BlockInfoStriped uses EC schema. Contributed by Kai Sasaki

2015-05-18 Thread zhz
HDFS-8367 BlockInfoStriped uses EC schema. Contributed by Kai Sasaki


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1437ab85
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1437ab85
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1437ab85

Branch: refs/heads/HDFS-7285
Commit: 1437ab85a79ecf2c0e33a9a24bfba4e26a5058e7
Parents: 5bd02f8
Author: Kai Zheng kai.zh...@intel.com
Authored: Tue May 19 00:10:30 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:03 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  2 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  6 +--
 .../blockmanagement/BlockInfoStriped.java   | 24 
 .../BlockInfoStripedUnderConstruction.java  | 12 +++---
 .../hdfs/server/namenode/FSDirWriteFileOp.java  |  4 +-
 .../hdfs/server/namenode/FSDirectory.java   |  3 ++
 .../hdfs/server/namenode/FSEditLogLoader.java   | 34 +
 .../hdfs/server/namenode/FSImageFormat.java | 10 +++--
 .../server/namenode/FSImageFormatPBINode.java   |  7 +++-
 .../server/namenode/FSImageSerialization.java   | 14 ---
 .../hdfs/server/namenode/FSNamesystem.java  |  2 +-
 .../blockmanagement/TestBlockInfoStriped.java   |  8 +++-
 .../server/namenode/TestFSEditLogLoader.java|  8 +++-
 .../hdfs/server/namenode/TestFSImage.java   |  6 ++-
 .../server/namenode/TestStripedINodeFile.java   | 39 ++--
 15 files changed, 99 insertions(+), 80 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1437ab85/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 1456434..333d85f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -215,3 +215,5 @@
 
 HDFS-8391. NN should consider current EC tasks handling count from DN 
while 
 assigning new tasks. (umamahesh)
+
+HDFS-8367. BlockInfoStriped uses EC schema. (Kai Sasaki via Kai Zheng)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1437ab85/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
index 94b2ff9..a6a356c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
@@ -203,6 +203,7 @@ import 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
 import org.apache.hadoop.hdfs.server.common.StorageInfo;
 import org.apache.hadoop.hdfs.server.namenode.CheckpointSignature;
+import org.apache.hadoop.hdfs.server.namenode.ErasureCodingSchemaManager;
 import org.apache.hadoop.hdfs.server.protocol.BalancerBandwidthCommand;
 import org.apache.hadoop.hdfs.server.protocol.BlockCommand;
 import org.apache.hadoop.hdfs.server.protocol.BlockECRecoveryCommand;
@@ -445,9 +446,8 @@ public class PBHelper {
 return new Block(b.getBlockId(), b.getNumBytes(), b.getGenStamp());
   }
 
-  public static BlockInfoStriped convert(StripedBlockProto p) {
-return new BlockInfoStriped(convert(p.getBlock()),
-(short) p.getDataBlockNum(), (short) p.getParityBlockNum());
+  public static BlockInfoStriped convert(StripedBlockProto p, ECSchema schema) {
+return new BlockInfoStriped(convert(p.getBlock()), schema);
   }
 
   public static StripedBlockProto convert(BlockInfoStriped blk) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1437ab85/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
index f0e52e3..d7a48a0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
@@ -19,7 +19,9 @@ package 

[28/50] hadoop git commit: HDFS-8242. Erasure Coding: XML based end-to-end test for ECCli commands (Contributed by Rakesh R)

2015-05-18 Thread zhz
HDFS-8242. Erasure Coding: XML based end-to-end test for ECCli commands 
(Contributed by Rakesh R)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/051d439a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/051d439a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/051d439a

Branch: refs/heads/HDFS-7285
Commit: 051d439ad38909725c4d55ba4c4afa7654f9
Parents: e85cd18
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue May 5 11:54:30 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:51 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../hdfs/tools/erasurecode/ECCommand.java   |   9 +-
 .../hadoop/cli/CLITestCmdErasureCoding.java |  38 +++
 .../apache/hadoop/cli/TestErasureCodingCLI.java | 114 +++
 .../cli/util/CLICommandErasureCodingCli.java|  21 ++
 .../cli/util/ErasureCodingCliCmdExecutor.java   |  37 ++
 .../test/resources/testErasureCodingConf.xml| 342 +++
 7 files changed, 561 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/051d439a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index faec023..ef760fc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -166,3 +166,6 @@
 (jing9)
 
 HDFS-8137. Send the EC schema to DataNode via EC encoding/recovering 
command(umamahesh)
+
+HDFS-8242. Erasure Coding: XML based end-to-end test for ECCli commands
+(Rakesh R via vinayakumarb)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/051d439a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCommand.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCommand.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCommand.java
index 84c2275..802a46d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCommand.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCommand.java
@@ -17,7 +17,9 @@
 package org.apache.hadoop.hdfs.tools.erasurecode;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.LinkedList;
+import java.util.List;
 
 import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -120,11 +122,12 @@ public abstract class ECCommand extends Command {
 sb.append("Schema '");
 sb.append(schemaName);
 sb.append("' does not match any of the supported schemas.");
-sb.append("Please select any one of [");
+sb.append(" Please select any one of ");
+List<String> schemaNames = new ArrayList<String>();
 for (ECSchema ecSchema : ecSchemas) {
-  sb.append(ecSchema.getSchemaName());
-  sb.append(", ");
+  schemaNames.add(ecSchema.getSchemaName());
 }
+sb.append(schemaNames);
 throw new HadoopIllegalArgumentException(sb.toString());
   }
 }
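
The rewrite delegates the list formatting to AbstractCollection.toString, which renders a List as [a, b, c]; that removes the hand-rolled bracket and comma bookkeeping (and its trailing-comma bug) from the old loop. A one-screen illustration (the schema names are invented):

import java.util.Arrays;
import java.util.List;

public class SchemaMessageDemo {
  public static void main(String[] args) {
    List<String> schemaNames = Arrays.asList("RS-6-3", "RS-10-4");  // invented
    StringBuilder sb = new StringBuilder(
        "Schema 'foo' does not match any of the supported schemas.");
    sb.append(" Please select any one of ");
    sb.append(schemaNames);  // List.toString() supplies "[RS-6-3, RS-10-4]"
    System.out.println(sb);
  }
}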

http://git-wip-us.apache.org/repos/asf/hadoop/blob/051d439a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/CLITestCmdErasureCoding.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/CLITestCmdErasureCoding.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/CLITestCmdErasureCoding.java
new file mode 100644
index 000..6c06a8d
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/CLITestCmdErasureCoding.java
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p/>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p/>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either 

[33/50] hadoop git commit: HDFS-7936. Erasure coding: resolving conflicts in the branch when merging trunk changes (this commit is for HDFS-8327 and HDFS-8357). Contributed by Zhe Zhang.

2015-05-18 Thread zhz
HDFS-7936. Erasure coding: resolving conflicts in the branch when merging trunk 
changes (this commit is for HDFS-8327 and HDFS-8357). Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/206ad7f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/206ad7f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/206ad7f3

Branch: refs/heads/HDFS-7285
Commit: 206ad7f3be5cc05f30b9b0723b24b0c39160850c
Parents: 053da55
Author: Zhe Zhang z...@apache.org
Authored: Mon May 11 12:22:12 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:00 2015 -0700

--
 .../hdfs/server/blockmanagement/BlockInfo.java  | 12 +--
 .../blockmanagement/BlockInfoContiguous.java| 38 
 .../server/blockmanagement/BlockManager.java|  4 +--
 .../erasurecode/ErasureCodingWorker.java|  3 +-
 .../hadoop/hdfs/server/namenode/INodeFile.java  | 10 ++
 .../server/namenode/TestStripedINodeFile.java   |  8 ++---
 .../namenode/TestTruncateQuotaUpdate.java   |  3 +-
 7 files changed, 23 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/206ad7f3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index aebfbb1..61068b9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -88,13 +88,21 @@ public abstract class BlockInfo extends Block
   BlockInfo getPrevious(int index) {
 assert this.triplets != null : "BlockInfo is not initialized";
 assert index >= 0 && index*3+1 < triplets.length : "Index is out of bound";
-return (BlockInfo) triplets[index*3+1];
+BlockInfo info = (BlockInfo)triplets[index*3+1];
+assert info == null ||
+info.getClass().getName().startsWith(BlockInfo.class.getName()) :
+"BlockInfo is expected at " + index*3;
+return info;
   }
 
   BlockInfo getNext(int index) {
 assert this.triplets != null : "BlockInfo is not initialized";
 assert index >= 0 && index*3+2 < triplets.length : "Index is out of bound";
-return (BlockInfo) triplets[index*3+2];
+BlockInfo info = (BlockInfo)triplets[index*3+2];
+assert info == null || info.getClass().getName().startsWith(
+BlockInfo.class.getName()) :
+"BlockInfo is expected at " + index*3;
+return info;
   }
 
   void setStorageInfo(int index, DatanodeStorageInfo storage) {
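
The added asserts check the slot's runtime type by class-name prefix rather than with instanceof, so any BlockInfo* subclass (contiguous or striped) passes while an unrelated object stored in the wrong triplet slot trips the assert. A toy demonstration of the prefix trick (the types are invented; nested-class names behave like the real top-level ones here):

public class PrefixCheckDemo {
  static class BlockInfo {}
  static class BlockInfoContiguous extends BlockInfo {}
  static class DatanodeStorageInfo {}

  public static void main(String[] args) {
    Object slot = new BlockInfoContiguous();
    // "...$BlockInfoContiguous" starts with "...$BlockInfo", so this passes.
    System.out.println(slot.getClass().getName()
        .startsWith(BlockInfo.class.getName()));   // true

    Object wrong = new DatanodeStorageInfo();
    System.out.println(wrong.getClass().getName()
        .startsWith(BlockInfo.class.getName()));   // false
  }
}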

http://git-wip-us.apache.org/repos/asf/hadoop/blob/206ad7f3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
index d3051a3..eeab076 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
@@ -47,18 +47,6 @@ public class BlockInfoContiguous extends BlockInfo {
 this.setBlockCollection(from.getBlockCollection());
   }
 
-  public BlockCollection getBlockCollection() {
-return bc;
-  }
-
-  public void setBlockCollection(BlockCollection bc) {
-this.bc = bc;
-  }
-
-  public boolean isDeleted() {
-return (bc == null);
-  }
-
   public DatanodeDescriptor getDatanode(int index) {
 DatanodeStorageInfo storage = getStorageInfo(index);
 return storage == null ? null : storage.getDatanodeDescriptor();
@@ -70,32 +58,6 @@ public class BlockInfoContiguous extends BlockInfo {
 return (DatanodeStorageInfo)triplets[index*3];
   }
 
-  private BlockInfoContiguous getPrevious(int index) {
-assert this.triplets != null : "BlockInfo is not initialized";
-assert index >= 0 && index*3+1 < triplets.length : "Index is out of bound";
-BlockInfoContiguous info = (BlockInfoContiguous)triplets[index*3+1];
-assert info == null ||
-info.getClass().getName().startsWith(BlockInfoContiguous.class.getName()) :
-  "BlockInfo is expected at " + index*3;
-return info;
-  }
-
-  BlockInfoContiguous 

[01/50] hadoop git commit: HDFS-8146. Protobuf changes for BlockECRecoveryCommand and its fields for making it ready for transfer to DN (Contributed by Uma Maheswara Rao G)

2015-05-18 Thread zhz
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 3365da575 -> 031b4d09d (forced update)


HDFS-8146. Protobuf changes for BlockECRecoveryCommand and its fields for 
making it ready for transfer to DN (Contributed by Uma Maheswara Rao G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d0bc27dc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d0bc27dc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d0bc27dc

Branch: refs/heads/HDFS-7285
Commit: d0bc27dc3a32d59be787aee1518d6dece852
Parents: 7541921
Author: Vinayakumar B vinayakum...@apache.org
Authored: Sat Apr 18 23:20:45 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:45 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java | 137 ++-
 .../blockmanagement/DatanodeDescriptor.java |  31 +
 .../server/blockmanagement/DatanodeManager.java |   4 +-
 .../server/protocol/BlockECRecoveryCommand.java |  80 ++-
 .../hdfs/server/protocol/DatanodeProtocol.java  |   2 +-
 .../src/main/proto/DatanodeProtocol.proto   |   8 ++
 .../src/main/proto/erasurecoding.proto  |  13 ++
 .../hadoop/hdfs/protocolPB/TestPBHelper.java|  88 
 .../namenode/TestRecoverStripedBlocks.java  |  10 +-
 10 files changed, 335 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0bc27dc/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 0ed61cd..40517e7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -87,3 +87,6 @@
 startup. (Hui Zheng via szetszwo)
 
 HDFS-8167. BlockManager.addBlockCollectionWithCheck should check if the 
block is a striped block. (Hui Zheng via zhz).
+
+HDFS-8146. Protobuf changes for BlockECRecoveryCommand and its fields for
+making it ready for transfer to DN (Uma Maheswara Rao G via vinayakumarb)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0bc27dc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
index 9ca73ae..c127b5f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
@@ -28,6 +28,7 @@ import java.io.IOException;
 import java.io.InputStream;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.Collection;
 import java.util.EnumSet;
 import java.util.HashMap;
 import java.util.List;
@@ -100,7 +101,7 @@ import 
org.apache.hadoop.hdfs.protocol.proto.AclProtos.AclEntryProto.AclEntryTyp
 import 
org.apache.hadoop.hdfs.protocol.proto.AclProtos.AclEntryProto.FsActionProto;
 import org.apache.hadoop.hdfs.protocol.proto.AclProtos.AclStatusProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.AclProtos.GetAclStatusResponseProto;
-import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos;
+import org.apache.hadoop.hdfs.protocol.proto.*;
 import 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.CacheDirectiveEntryProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.CacheDirectiveInfoExpirationProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.CacheDirectiveInfoProto;
@@ -121,6 +122,7 @@ import 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmI
 import 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitShmSlotProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.BalancerBandwidthCommandProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.BlockCommandProto;
+import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.BlockECRecoveryCommandProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.BlockIdCommandProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.BlockRecoveryCommandProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.DatanodeCommandProto;
@@ -132,11 +134,11 @@ import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.ReceivedDele
 import 

[09/50] hadoop git commit: HDFS-8156. Add/implement necessary APIs even we just have the system default schema. Contributed by Kai Zheng.

2015-05-18 Thread zhz
HDFS-8156. Add/implement necessary APIs even we just have the system default 
schema. Contributed by Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0df2b787
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0df2b787
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0df2b787

Branch: refs/heads/HDFS-7285
Commit: 0df2b787e840c55f09cb242058e08861b74b51ac
Parents: 999b25f
Author: Zhe Zhang z...@apache.org
Authored: Wed Apr 22 14:48:54 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:47 2015 -0700

--
 .../apache/hadoop/io/erasurecode/ECSchema.java  | 173 +++
 .../hadoop/io/erasurecode/TestECSchema.java |   2 +-
 .../hadoop/io/erasurecode/TestSchemaLoader.java |   6 +-
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |   2 +-
 .../hdfs/server/namenode/ECSchemaManager.java   |  79 -
 .../namenode/ErasureCodingZoneManager.java  |  16 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  29 +++-
 .../org/apache/hadoop/hdfs/TestECSchemas.java   |   5 +-
 .../hadoop/hdfs/TestErasureCodingZones.java |  45 +++--
 10 files changed, 249 insertions(+), 111 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0df2b787/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
index 32077f6..f058ea7 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.io.erasurecode;
 
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.Map;
 
 /**
@@ -30,55 +31,80 @@ public final class ECSchema {
  public static final String CHUNK_SIZE_KEY = "chunkSize";
   public static final int DEFAULT_CHUNK_SIZE = 256 * 1024; // 256K
 
-  private String schemaName;
-  private String codecName;
-  private Map<String, String> options;
-  private int numDataUnits;
-  private int numParityUnits;
-  private int chunkSize;
+  /**
+   * A friendly and understandable name that can mean what's it, also serves as
+   * the identifier that distinguish it from other schemas.
+   */
+  private final String schemaName;
+
+  /**
+   * The erasure codec name associated.
+   */
+  private final String codecName;
+
+  /**
+   * Number of source data units coded
+   */
+  private final int numDataUnits;
+
+  /**
+   * Number of parity units generated in a coding
+   */
+  private final int numParityUnits;
+
+  /**
+   * Unit data size for each chunk in a coding
+   */
+  private final int chunkSize;
+
+  /*
+   * An erasure code can have its own specific advanced parameters, subject to
+   * itself to interpret these key-value settings.
+   */
+  private final Map<String, String> extraOptions;
 
   /**
-   * Constructor with schema name and provided options. Note the options may
+   * Constructor with schema name and provided all options. Note the options 
may
* contain additional information for the erasure codec to interpret further.
* @param schemaName schema name
-   * @param options schema options
+   * @param allOptions all schema options
*/
-  public ECSchema(String schemaName, Map<String, String> options) {
+  public ECSchema(String schemaName, Map<String, String> allOptions) {
 assert (schemaName != null && ! schemaName.isEmpty());
 
 this.schemaName = schemaName;
 
-if (options == null || options.isEmpty()) {
+if (allOptions == null || allOptions.isEmpty()) {
   throw new IllegalArgumentException("No schema options are provided");
 }
 
-String codecName = options.get(CODEC_NAME_KEY);
+this.codecName = allOptions.get(CODEC_NAME_KEY);
 if (codecName == null || codecName.isEmpty()) {
   throw new IllegalArgumentException("No codec option is provided");
 }
 
-int dataUnits = 0, parityUnits = 0;
-try {
-  if (options.containsKey(NUM_DATA_UNITS_KEY)) {
-dataUnits = Integer.parseInt(options.get(NUM_DATA_UNITS_KEY));
-  }
-} catch (NumberFormatException e) {
-  throw new IllegalArgumentException("Option value " +
-  options.get(NUM_DATA_UNITS_KEY) + " for " + NUM_DATA_UNITS_KEY +
-   " is found. It should be an integer");
+int tmpNumDataUnits = extractIntOption(NUM_DATA_UNITS_KEY, allOptions);
+int tmpNumParityUnits = extractIntOption(NUM_PARITY_UNITS_KEY, allOptions);
+ 
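
For orientation, the refactored constructor takes one map of all options and pulls the codec name and unit counts out of it via extractIntOption. A usage sketch against the keys visible in this diff, assuming the key constants are public the way CHUNK_SIZE_KEY is (the values are invented):

import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.erasurecode.ECSchema;

public class ECSchemaDemo {
  public static void main(String[] args) {
    Map<String, String> allOptions = new HashMap<>();
    allOptions.put(ECSchema.CODEC_NAME_KEY, "rs");        // invented values
    allOptions.put(ECSchema.NUM_DATA_UNITS_KEY, "6");
    allOptions.put(ECSchema.NUM_PARITY_UNITS_KEY, "3");

    // Schema name plus a single map of all options, per the new constructor.
    ECSchema schema = new ECSchema("RS-6-3", allOptions);
    System.out.println(schema);
  }
}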

[30/50] hadoop git commit: HDFS-7348. Erasure Coding: DataNode reconstruct striped blocks. Contributed by Yi Liu.

2015-05-18 Thread zhz
HDFS-7348. Erasure Coding: DataNode reconstruct striped blocks. Contributed by 
Yi Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0b00fe86
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0b00fe86
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0b00fe86

Branch: refs/heads/HDFS-7285
Commit: 0b00fe8605d2b6ad60274b1aa3a8e68c88a39281
Parents: b791cdd
Author: Zhe Zhang z...@apache.org
Authored: Tue May 5 16:33:56 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:52 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../org/apache/hadoop/hdfs/BlockReader.java |   6 +
 .../apache/hadoop/hdfs/BlockReaderLocal.java|   5 +
 .../hadoop/hdfs/BlockReaderLocalLegacy.java |   5 +
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   6 +
 .../java/org/apache/hadoop/hdfs/DFSPacket.java  |  10 +-
 .../apache/hadoop/hdfs/RemoteBlockReader.java   |   5 +
 .../apache/hadoop/hdfs/RemoteBlockReader2.java  |   5 +
 .../hadoop/hdfs/server/datanode/DNConf.java |  27 +
 .../hadoop/hdfs/server/datanode/DataNode.java   |  31 +-
 .../erasurecode/ErasureCodingWorker.java| 893 ++-
 .../hadoop/hdfs/util/StripedBlockUtil.java  |  49 +-
 .../src/main/resources/hdfs-default.xml |  31 +-
 .../hadoop/hdfs/TestRecoverStripedFile.java | 356 
 14 files changed, 1377 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b00fe86/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 7efaa5a..0d2d448 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -175,3 +175,6 @@
 
 HDFS-7672. Handle write failure for stripping blocks and refactor the
 existing code in DFSStripedOutputStream and StripedDataStreamer.  
(szetszwo)
+
+HDFS-7348. Erasure Coding: DataNode reconstruct striped blocks. 
+(Yi Liu via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b00fe86/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
index aa3e8ba..0a5511e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.fs.ByteBufferReadable;
 import org.apache.hadoop.fs.ReadOption;
 import org.apache.hadoop.hdfs.shortcircuit.ClientMmap;
+import org.apache.hadoop.util.DataChecksum;
 
 /**
  * A BlockReader is responsible for reading a single block
@@ -99,4 +100,9 @@ public interface BlockReader extends ByteBufferReadable {
*  supported.
*/
   ClientMmap getClientMmap(EnumSet<ReadOption> opts);
+
+  /**
+   * @return  The DataChecksum used by the read block
+   */
+  DataChecksum getDataChecksum();
 }
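
Each BlockReader implementation now exposes the DataChecksum of the block it is reading; the striped reconstruction path can use it to emit checksums for rebuilt data that match the source block. A small fragment showing the sort of consumer this enables (the helper below is hypothetical, not part of the patch):

import org.apache.hadoop.hdfs.BlockReader;
import org.apache.hadoop.util.DataChecksum;

public class ChecksumInspectDemo {
  // Hypothetical helper: report the checksum parameters a reader exposes.
  static void describe(BlockReader reader) {
    DataChecksum checksum = reader.getDataChecksum();
    System.out.println("type: " + checksum.getChecksumType()
        + ", bytesPerChecksum: " + checksum.getBytesPerChecksum()
        + ", checksumSize: " + checksum.getChecksumSize());
  }
}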

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b00fe86/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
index d913f3a..0b2420d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
@@ -738,4 +738,9 @@ class BlockReaderLocal implements BlockReader {
   void forceUnanchorable() {
 replica.getSlot().makeUnanchorable();
   }
+
+  @Override
+  public DataChecksum getDataChecksum() {
+return checksum;
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b00fe86/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocalLegacy.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocalLegacy.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocalLegacy.java
index c16ffdf..04cf733 100644
--- 

[10/50] hadoop git commit: HDFS-8212. DistributedFileSystem.createErasureCodingZone should pass schema in FileSystemLinkResolver. Contributed by Tsz Wo Nicholas Sze.

2015-05-18 Thread zhz
HDFS-8212. DistributedFileSystem.createErasureCodingZone should pass schema in 
FileSystemLinkResolver. Contributed by Tsz Wo Nicholas Sze.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/656e841b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/656e841b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/656e841b

Branch: refs/heads/HDFS-7285
Commit: 656e841b44149e03c4cd869dfbec459ba843d65f
Parents: 25f5300
Author: Zhe Zhang z...@apache.org
Authored: Tue Apr 21 21:03:07 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:47 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt  | 3 +++
 .../main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java   | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/656e841b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index d8f2e9d..3d86f05 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -110,3 +110,6 @@
 
 HDFS-8216. TestDFSStripedOutputStream should use BlockReaderTestUtil to 
 create BlockReader. (szetszwo via Zhe Zhang)
+
+HDFS-8212. DistributedFileSystem.createErasureCodingZone should pass schema
+in FileSystemLinkResolver. (szetszwo via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/656e841b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index 4c8fff3..ede4f48 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -2281,7 +2281,7 @@ public class DistributedFileSystem extends FileSystem {
   @Override
   public Void doCall(final Path p) throws IOException,
   UnresolvedLinkException {
-dfs.createErasureCodingZone(getPathName(p), null);
+dfs.createErasureCodingZone(getPathName(p), schema);
 return null;
   }
 



[24/50] hadoop git commit: HDFS-8316. Erasure coding: refactor EC constants to be consistent with HDFS-8249. Contributed by Zhe Zhang.

2015-05-18 Thread zhz
HDFS-8316. Erasure coding: refactor EC constants to be consistent with 
HDFS-8249. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2ad183e8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2ad183e8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2ad183e8

Branch: refs/heads/HDFS-7285
Commit: 2ad183e8005009169d6685213ff358981410de79
Parents: bdd264f
Author: Jing Zhao ji...@apache.org
Authored: Mon May 4 11:24:35 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:50 2015 -0700

--
 .../org/apache/hadoop/hdfs/protocol/HdfsConstants.java   | 11 +++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt |  3 +++
 .../org/apache/hadoop/hdfs/DFSStripedOutputStream.java   |  2 +-
 .../hdfs/server/blockmanagement/BlockIdManager.java  |  4 ++--
 .../blockmanagement/SequentialBlockGroupIdGenerator.java |  4 ++--
 .../hadoop/hdfs/server/common/HdfsServerConstants.java   |  5 -
 .../hdfs/server/namenode/TestAddStripedBlocks.java   |  4 ++--
 .../hdfs/server/namenode/TestStripedINodeFile.java   |  6 +++---
 8 files changed, 28 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ad183e8/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 58c7ea1..32ca81c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -75,6 +75,17 @@ public final class HdfsConstants {
   public static final String CLIENT_NAMENODE_PROTOCOL_NAME =
      "org.apache.hadoop.hdfs.protocol.ClientProtocol";
 
+  /*
+   * These values correspond to the values used by the system default erasure
+   * coding schema.
+   * TODO: to be removed once all places use schema.
+   */
+
+  public static final byte NUM_DATA_BLOCKS = 6;
+  public static final byte NUM_PARITY_BLOCKS = 3;
+  // The chunk size for striped block which is used by erasure coding
+  public static final int BLOCK_STRIPED_CELL_SIZE = 256 * 1024;
+
   // SafeMode actions
   public enum SafeModeAction {
 SAFEMODE_LEAVE, SAFEMODE_ENTER, SAFEMODE_GET

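These constants pin down the default striping geometry until per-file schemas are used everywhere: one full stripe carries 6 x 256 KB = 1.5 MB of user data across 9 internal blocks. A quick back-of-the-envelope sketch (the 128 MB block size is an assumption, not part of this patch):

    long blockSize = 128L << 20;  // assumed dfs.blocksize
    long stripeData = (long) HdfsConstants.BLOCK_STRIPED_CELL_SIZE
        * HdfsConstants.NUM_DATA_BLOCKS;        // 6 * 256 KB = 1.5 MB per stripe
    long rawGroup = blockSize
        * (HdfsConstants.NUM_DATA_BLOCKS + HdfsConstants.NUM_PARITY_BLOCKS);
    // rawGroup = 1152 MB of raw storage for at most 768 MB of data: 1.5x overhead.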
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ad183e8/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 145494f..e30b2ed 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -158,3 +158,6 @@
 
 HDFS-7949. WebImageViewer need support file size calculation with striped 
 blocks. (Rakesh R via Zhe Zhang)
+
+HDFS-8316. Erasure coding: refactor EC constants to be consistent with 
HDFS-8249.
+(Zhe Zhang via jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ad183e8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
index 5e2a534..71cdbb9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
@@ -419,7 +419,7 @@ public class DFSStripedOutputStream extends DFSOutputStream 
{
   @Override
   protected synchronized void closeImpl() throws IOException {
 if (isClosed()) {
-  getLeadingStreamer().getLastException().check();
+  getLeadingStreamer().getLastException().check(true);
   return;
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ad183e8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
index 

[43/50] hadoop git commit: HDFS-8195. Erasure coding: Fix file quota change when we complete/commit the striped blocks. Contributed by Takuya Fukudome.

2015-05-18 Thread zhz
HDFS-8195. Erasure coding: Fix file quota change when we complete/commit the 
striped blocks. Contributed by Takuya Fukudome.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/139c0a9a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/139c0a9a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/139c0a9a

Branch: refs/heads/HDFS-7285
Commit: 139c0a9a26960c1153fbfd6351214149d9ea8487
Parents: cc5b07e
Author: Zhe Zhang z...@apache.org
Authored: Tue May 12 23:10:25 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:02 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../hdfs/server/namenode/FSDirectory.java   |   2 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  25 +++-
 .../namenode/TestQuotaWithStripedBlocks.java| 125 +++
 4 files changed, 151 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/139c0a9a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 0a2bb9e..0945d72 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -206,3 +206,6 @@
 handled properly (Rakesh R via zhz)
 
 HDFS-8363. Erasure Coding: DFSStripedInputStream#seekToNewSource. (yliu)
+
+HDFS-8195. Erasure coding: Fix file quota change when we complete/commit 
+the striped blocks. (Takuya Fukudome via zhz)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/139c0a9a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 3f619ff..f879fb9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -619,7 +619,7 @@ public class FSDirectory implements Closeable {
 final INodeFile fileINode = iip.getLastINode().asFile();
 EnumCounters<StorageType> typeSpaceDeltas =
   getStorageTypeDeltas(fileINode.getStoragePolicyID(), ssDelta,
-  replication, replication);;
+  replication, replication);
 updateCount(iip, iip.length() - 1,
   new QuotaCounts.Builder().nameSpace(nsDelta).storageSpace(ssDelta * 
replication).
   typeSpaces(typeSpaceDeltas).build(),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/139c0a9a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 58fcf6a..9a27105 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -3856,11 +3856,30 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 }
 
 // Adjust disk space consumption if required
-// TODO: support EC files
-final long diff = fileINode.getPreferredBlockSize() - 
commitBlock.getNumBytes();
+final long diff;
+final short replicationFactor;
+if (fileINode.isStriped()) {
+  final ECSchema ecSchema = dir.getECSchema(iip);
+  final short numDataUnits = (short) ecSchema.getNumDataUnits();
+  final short numParityUnits = (short) ecSchema.getNumParityUnits();
+
+  final long numBlocks = numDataUnits + numParityUnits;
+  final long fullBlockGroupSize =
+  fileINode.getPreferredBlockSize() * numBlocks;
+
+  final BlockInfoStriped striped = new BlockInfoStriped(commitBlock,
+  numDataUnits, numParityUnits);
+  final long actualBlockGroupSize = striped.spaceConsumed();
+
+  diff = fullBlockGroupSize - actualBlockGroupSize;
+  replicationFactor = (short) 1;
+} else {
+  diff = fileINode.getPreferredBlockSize() - commitBlock.getNumBytes();
+  replicationFactor = fileINode.getFileReplication();
+}
 if (diff > 0) {
   try {
-dir.updateSpaceConsumed(iip, 0, -diff, 

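The striped branch above exists because quota is charged pessimistically at allocation time, for preferredBlockSize bytes in every internal block, and the commit must hand back whatever was not used. The replication factor passed on is 1 because each striped byte lands on exactly one datanode. Rough numbers under the default 6+3 schema and an assumed 128 MB block size:

    // fullBlockGroupSize   = 128 MB * (6 + 3)         = 1152 MB reserved
    // actualBlockGroupSize = striped.spaceConsumed()  = e.g.  600 MB written
    // diff                 = 1152 MB - 600 MB         =  552 MB released,
    // returned through dir.updateSpaceConsumed(iip, 0, -diff, (short) 1).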
[06/50] hadoop git commit: HDFS-8188. Erasure coding: refactor client-related code to sync with HDFS-8082 and HDFS-8169. Contributed by Zhe Zhang.

2015-05-18 Thread zhz
HDFS-8188. Erasure coding: refactor client-related code to sync with HDFS-8082 
and HDFS-8169. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/936547dc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/936547dc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/936547dc

Branch: refs/heads/HDFS-7285
Commit: 936547dcc58e9a426932b09fa89f79902084af7d
Parents: 4c37b05
Author: Zhe Zhang z...@apache.org
Authored: Mon Apr 20 14:19:12 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:46 2015 -0700

--
 .../hdfs/client/HdfsClientConfigKeys.java   | 12 
 .../hdfs/protocol/LocatedStripedBlock.java  | 64 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  | 21 ++
 .../hadoop/hdfs/client/impl/DfsClientConf.java  | 21 +-
 .../hdfs/protocol/LocatedStripedBlock.java  | 73 
 .../server/blockmanagement/BlockManager.java| 25 ---
 .../server/namenode/TestStripedINodeFile.java   |  3 +-
 7 files changed, 119 insertions(+), 100 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/936547dc/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index 26283aa..6006d71 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -177,6 +177,18 @@ public interface HdfsClientConfigKeys {
 int THREADPOOL_SIZE_DEFAULT = 0;
   }
 
+  /** dfs.client.read.striped configuration properties */
+  interface StripedRead {
+String PREFIX = Read.PREFIX + "striped.";
+
+String  THREADPOOL_SIZE_KEY = PREFIX + "threadpool.size";
+/**
+ * With default 6+3 schema, each normal read could span 6 DNs. So this
+ * default value accommodates 3 read streams
+ */
+int THREADPOOL_SIZE_DEFAULT = 18;
+  }
+
   /** dfs.http.client configuration properties */
   interface HttpClient {
 String  PREFIX = "dfs.http.client.";

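Resolving the prefixes, the new key should come out as dfs.client.read.striped.threadpool.size, assuming Read.PREFIX is "dfs.client.read." as the neighbouring keys suggest. A client expecting heavier striped-read concurrency could raise it from the default 18:

    Configuration conf = new Configuration();
    // 6 DNs per logical read; make room for 6 concurrent reads instead of 3.
    conf.setInt("dfs.client.read.striped.threadpool.size", 36);
    FileSystem fs = FileSystem.get(conf);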
http://git-wip-us.apache.org/repos/asf/hadoop/blob/936547dc/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
new file mode 100644
index 000..93a5948
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocol;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.StorageType;
+
+import java.util.Arrays;
+
+/**
+ * {@link LocatedBlock} with striped block support. For a striped block, each
+ * datanode storage is associated with a block in the block group. We need to
+ * record the index (in the striped block group) for each of them.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class LocatedStripedBlock extends LocatedBlock {
+  private int[] blockIndices;
+
+  public LocatedStripedBlock(ExtendedBlock b, DatanodeInfo[] locs,
+  String[] storageIDs, StorageType[] storageTypes, int[] indices,
+  long startOffset, boolean corrupt, DatanodeInfo[] cachedLocs) {
+super(b, locs, storageIDs, 

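The indices array is what makes a located block striped: locs[i], storageIDs[i] and storageTypes[i] describe one storage location as before, while blockIndices[i] records which internal block of the group lives there. A hedged sketch of a consumer (getBlockIndices() is assumed as the accessor for the field above):

    // A healthy 6+3 group typically reports indices {0,...,8}:
    // 0-5 are data blocks, 6-8 are parity; a missing index means a lost block.
    int[] indices = lsb.getBlockIndices();
    for (int i = 0; i < indices.length; i++) {
      System.out.println(lsb.getLocations()[i] + " holds internal block " + indices[i]);
    }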
[42/50] hadoop git commit: Merge HDFS-8394 from trunk: Move getAdditionalBlock() and related functionalities into a separate class.

2015-05-18 Thread zhz
Merge HDFS-8394 from trunk: Move getAdditionalBlock() and related 
functionalities into a separate class.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5bd02f84
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5bd02f84
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5bd02f84

Branch: refs/heads/HDFS-7285
Commit: 5bd02f8412f85a67faf7c6bb0826707202b8999f
Parents: 764e16d
Author: Jing Zhao ji...@apache.org
Authored: Sat May 16 16:57:12 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:02 2015 -0700

--
 .../blockmanagement/BlockInfoContiguous.java|   2 +-
 .../server/blockmanagement/BlockManager.java|   8 +-
 .../hdfs/server/namenode/FSDirWriteFileOp.java  | 120 +++
 .../hdfs/server/namenode/FSNamesystem.java  |   6 +-
 .../hadoop/hdfs/server/namenode/INodeFile.java  |   8 --
 .../hadoop/hdfs/util/StripedBlockUtil.java  |   2 +-
 .../org/apache/hadoop/hdfs/DFSTestUtil.java |   2 +-
 7 files changed, 81 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5bd02f84/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
index eeab076..416091f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
@@ -42,7 +42,7 @@ public class BlockInfoContiguous extends BlockInfo {
* @param from BlockReplicationInfo to copy from.
*/
   protected BlockInfoContiguous(BlockInfoContiguous from) {
-this(from, from.getBlockCollection().getBlockReplication());
+this(from, from.getBlockCollection().getPreferredBlockReplication());
 this.triplets = new Object[from.triplets.length];
 this.setBlockCollection(from.getBlockCollection());
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5bd02f84/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 2e7855e..9cdfa05 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -3560,6 +3560,11 @@ public class BlockManager {
 return storages;
   }
 
+  /** @return an iterator of the datanodes. */
+  public Iterable<DatanodeStorageInfo> getStorages(final Block block) {
+return blocksMap.getStorages(block);
+  }
+
   public int getTotalBlocks() {
 return blocksMap.size();
   }
@@ -3951,7 +3956,7 @@ public class BlockManager {
 null);
   }
 
-  public LocatedBlock newLocatedBlock(ExtendedBlock eb, BlockInfo info,
+  public static LocatedBlock newLocatedBlock(ExtendedBlock eb, BlockInfo info,
   DatanodeStorageInfo[] locs, long offset) throws IOException {
 final LocatedBlock lb;
 if (info.isStriped()) {
@@ -3961,7 +3966,6 @@ public class BlockManager {
 } else {
   lb = newLocatedBlock(eb, locs, offset, false);
 }
-setBlockToken(lb, BlockTokenIdentifier.AccessMode.WRITE);
 return lb;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5bd02f84/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
index 1ff0899..324cc16 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
@@ -26,12 +26,15 @@ import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import 

[14/50] hadoop git commit: HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may cause block id conflicts. Contributed by Jing Zhao.

2015-05-18 Thread zhz
HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may cause 
block id conflicts. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/98ebb4ed
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/98ebb4ed
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/98ebb4ed

Branch: refs/heads/HDFS-7285
Commit: 98ebb4eda1916e1db5057796e6fa075781366214
Parents: b4a33f2
Author: Zhe Zhang z...@apache.org
Authored: Fri Apr 24 09:30:38 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:48 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 ++
 .../SequentialBlockGroupIdGenerator.java| 39 +++---
 .../SequentialBlockIdGenerator.java |  2 +-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 57 +++-
 .../server/namenode/TestAddStripedBlocks.java   | 21 
 5 files changed, 77 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/98ebb4ed/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 9357e23..cf41a9b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -128,3 +128,6 @@
 
 HDFS-8223. Should calculate checksum for parity blocks in 
DFSStripedOutputStream.
 (Yi Liu via jing9)
+
+HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may 
cause 
+block id conflicts (Jing Zhao via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/98ebb4ed/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
index e9e22ee..de8e379 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
@@ -19,9 +19,11 @@ package org.apache.hadoop.hdfs.server.blockmanagement;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.protocol.Block;
-import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.util.SequentialNumber;
 
+import static 
org.apache.hadoop.hdfs.protocol.HdfsConstants.BLOCK_GROUP_INDEX_MASK;
+import static 
org.apache.hadoop.hdfs.protocol.HdfsConstants.MAX_BLOCKS_IN_GROUP;
+
 /**
  * Generate the next valid block group ID by incrementing the maximum block
  * group ID allocated so far, with the first 2^10 block group IDs reserved.
@@ -34,6 +36,9 @@ import org.apache.hadoop.util.SequentialNumber;
  * bits (n+2) to (64-m) represent the ID of its block group, while the last m
  * bits represent its index of the group. The value m is determined by the
  * maximum number of blocks in a group (MAX_BLOCKS_IN_GROUP).
+ *
+ * Note that the {@link #nextValue()} methods requires external lock to
+ * guarantee IDs have no conflicts.
  */
 @InterfaceAudience.Private
 public class SequentialBlockGroupIdGenerator extends SequentialNumber {
@@ -47,32 +52,30 @@ public class SequentialBlockGroupIdGenerator extends 
SequentialNumber {
 
   @Override // NumberGenerator
   public long nextValue() {
-// Skip to next legitimate block group ID based on the naming protocol
-while (super.getCurrentValue() % HdfsConstants.MAX_BLOCKS_IN_GROUP > 0) {
-  super.nextValue();
-}
+skipTo((getCurrentValue() & ~BLOCK_GROUP_INDEX_MASK) + 
MAX_BLOCKS_IN_GROUP);
 // Make sure there's no conflict with existing random block IDs
-while (hasValidBlockInRange(super.getCurrentValue())) {
-  super.skipTo(super.getCurrentValue() +
-  HdfsConstants.MAX_BLOCKS_IN_GROUP);
+final Block b = new Block(getCurrentValue());
+while (hasValidBlockInRange(b)) {
+  skipTo(getCurrentValue() + MAX_BLOCKS_IN_GROUP);
+  b.setBlockId(getCurrentValue());
 }
-if (super.getCurrentValue() >= 0) {
-  BlockManager.LOG.warn("All negative block group IDs are used, " +
-  "growing into positive IDs, " +
-  "which might conflict with non-erasure coded blocks.");
+if (b.getBlockId() >= 0) {
+  throw new IllegalStateException("All 

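Given the bit layout in the class comment, recovering the group and the index from an internal block's ID is a pair of mask operations. A sketch, assuming MAX_BLOCKS_IN_GROUP is 16 so BLOCK_GROUP_INDEX_MASK is 0xF (values implied by the 6+3 schema, not shown in this hunk):

    static long groupIdOf(long blockId) {
      // Clear the low index bits; every block of one group shares this value.
      return blockId & ~HdfsConstants.BLOCK_GROUP_INDEX_MASK;
    }

    static int indexInGroup(long blockId) {
      // 0..5 data, 6..8 parity under the default schema.
      return (int) (blockId & HdfsConstants.BLOCK_GROUP_INDEX_MASK);
    }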
[41/50] hadoop git commit: HDFS-8391. NN should consider current EC tasks handling count from DN while assigning new tasks. Contributed by Uma Maheswara Rao G.

2015-05-18 Thread zhz
HDFS-8391. NN should consider current EC tasks handling count from DN while 
assigning new tasks. Contributed by Uma Maheswara Rao G.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/764e16d5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/764e16d5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/764e16d5

Branch: refs/heads/HDFS-7285
Commit: 764e16d5299284e9955b56af4c15b2e9c47bb927
Parents: 8bc4adb
Author: Uma Maheswara Rao G umamah...@apache.org
Authored: Thu May 14 11:27:48 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:02 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt |  3 +++
 .../hadoop/hdfs/server/datanode/DataNode.java| 19 +--
 .../erasurecode/ErasureCodingWorker.java |  4 +++-
 3 files changed, 23 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/764e16d5/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 190ddd6..1456434 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -212,3 +212,6 @@
 
 HDFS-8364. Erasure coding: fix some minor bugs in EC CLI
 (Walter Su via vinayakumarb)
+
+HDFS-8391. NN should consider current EC tasks handling count from DN 
while 
+assigning new tasks. (umamahesh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/764e16d5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index 5eca2c7..a1a80ee 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -1909,6 +1909,21 @@ public class DataNode extends ReconfigurableBase
   int getXmitsInProgress() {
 return xmitsInProgress.get();
   }
+  
+  /**
+   * Increments the xmitsInProgress count. xmitsInProgress count represents the
+   * number of data replication/reconstruction tasks running currently.
+   */
+  public void incrementXmitsInProgress() {
+xmitsInProgress.getAndIncrement();
+  }
+
+  /**
+   * Decrements the xmitsInProgress count
+   */
+  public void decrementXmitsInProgress() {
+xmitsInProgress.getAndDecrement();
+  }
 
   private void reportBadBlock(final BPOfferService bpos,
   final ExtendedBlock block, final String msg) {
@@ -2128,7 +2143,7 @@ public class DataNode extends ReconfigurableBase
  */
 @Override
 public void run() {
-  xmitsInProgress.getAndIncrement();
+  incrementXmitsInProgress();
   Socket sock = null;
   DataOutputStream out = null;
   DataInputStream in = null;
@@ -2207,7 +2222,7 @@ public class DataNode extends ReconfigurableBase
 // check if there are any disk problem
 checkDiskErrorAsync();
   } finally {
-xmitsInProgress.getAndDecrement();
+decrementXmitsInProgress();
 IOUtils.closeStream(blockSender);
 IOUtils.closeStream(out);
 IOUtils.closeStream(in);

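Wrapping the bare getAndIncrement/getAndDecrement calls matters because the EC reconstruction path (next hunk) now charges the same counter, which the DataNode reports in heartbeats and the NameNode consults before assigning new tasks. Any task that transmits data should follow the pattern run() uses above:

    datanode.incrementXmitsInProgress();
    try {
      // replicate or reconstruct the block ...
    } finally {
      datanode.decrementXmitsInProgress(); // always release, even on failure
    }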
http://git-wip-us.apache.org/repos/asf/hadoop/blob/764e16d5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
index eedb191..7b3c24d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
@@ -312,6 +312,7 @@ public final class ErasureCodingWorker {
 
 @Override
 public void run() {
+  datanode.incrementXmitsInProgress();
   try {
 // Store the indices of successfully read source
 // This will be updated after doing real read.
@@ -397,8 +398,9 @@ public final class ErasureCodingWorker {
 // Currently we don't check the acks for packets, this is similar as
   

[20/50] hadoop git commit: HDFS-8189. ClientProtocol#createErasureCodingZone API was wrongly annotated as Idempotent (Contributed by Vinayakumar B)

2015-05-18 Thread zhz
HDFS-8189. ClientProtocol#createErasureCodingZone API was wrongly annotated as 
Idempotent (Contributed by Vinayakumar B)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c1e85dc0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c1e85dc0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c1e85dc0

Branch: refs/heads/HDFS-7285
Commit: c1e85dc0ae719fca5dcf5c0453c68502c5bbdc38
Parents: 6506d7d
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue Apr 28 14:24:17 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:49 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  5 -
 .../apache/hadoop/hdfs/protocol/ClientProtocol.java | 16 
 2 files changed, 12 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1e85dc0/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index c28473b..6c5d7ce 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -136,4 +136,7 @@
 striped layout (Zhe Zhang)
 
 HDFS-8230. Erasure Coding: Ignore 
DatanodeProtocol#DNA_ERASURE_CODING_RECOVERY 
-commands from standbynode if any (vinayakumarb)
\ No newline at end of file
+commands from standbynode if any (vinayakumarb)
+
+HDFS-8189. ClientProtocol#createErasureCodingZone API was wrongly annotated
+as Idempotent (vinayakumarb)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1e85dc0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
index bba7697..76e2d12 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
@@ -1364,14 +1364,6 @@ public interface ClientProtocol {
   long prevId) throws IOException;
 
   /**
-   * Create an erasure coding zone with specified schema, if any, otherwise
-   * default
-   */
-  @Idempotent
-  public void createErasureCodingZone(String src, ECSchema schema)
-  throws IOException;
-
-  /**
* Set xattr of a file or directory.
* The name must be prefixed with the namespace followed by .. For example,
* user.attr.
@@ -1467,6 +1459,14 @@ public interface ClientProtocol {
   public EventBatchList getEditsFromTxid(long txid) throws IOException;
 
   /**
+   * Create an erasure coding zone with specified schema, if any, otherwise
+   * default
+   */
+  @AtMostOnce
+  public void createErasureCodingZone(String src, ECSchema schema)
+  throws IOException;
+
+  /**
* Gets the ECInfo for the specified file/directory
* 
* @param src

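The distinction matters to the RPC retry cache: an @Idempotent method may simply be re-executed when a client retries, while an @AtMostOnce method mutates namespace state, so a retried call must be answered from the retry cache instead of running again. createErasureCodingZone creates state, hence the move. In sketch form (annotations from org.apache.hadoop.io.retry):

    @AtMostOnce   // re-running would try to create the zone a second time
    public void createErasureCodingZone(String src, ECSchema schema)
        throws IOException;

    @Idempotent   // read-only; retrying returns the same answer
    public ECInfo getErasureCodingInfo(String src) throws IOException;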


[25/50] hadoop git commit: HDFS-8324. Add trace info to DFSClient#getErasureCodingZoneInfo(..). Contributed by Vinayakumar B

2015-05-18 Thread zhz
HDFS-8324. Add trace info to DFSClient#getErasureCodingZoneInfo(..). 
Contributed by Vinayakumar B


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/defc4a12
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/defc4a12
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/defc4a12

Branch: refs/heads/HDFS-7285
Commit: defc4a1256fc2959347a4fc226bee279630807ae
Parents: 051d439
Author: Uma Maheswara Rao G umamah...@apache.org
Authored: Tue May 5 19:25:21 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:51 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt  | 3 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSClient.java   | 3 +++
 2 files changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/defc4a12/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index ef760fc..a8df3f2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -169,3 +169,6 @@
 
 HDFS-8242. Erasure Coding: XML based end-to-end test for ECCli commands
 (Rakesh R via vinayakumarb)
+
+HDFS-8324. Add trace info to DFSClient#getErasureCodingZoneInfo(..) 
(vinayakumarb via 
+umamahesh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/defc4a12/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 5fb23a0..63c27ef 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -3351,11 +3351,14 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
*/
   public ECZoneInfo getErasureCodingZoneInfo(String src) throws IOException {
 checkOpen();
+TraceScope scope = getPathTraceScope("getErasureCodingZoneInfo", src);
 try {
   return namenode.getErasureCodingZoneInfo(src);
 } catch (RemoteException re) {
   throw re.unwrapRemoteException(FileNotFoundException.class,
   AccessControlException.class, UnresolvedPathException.class);
+} finally {
+  scope.close();
 }
   }
 }



[37/50] hadoop git commit: HDFS-7678. Erasure coding: DFSInputStream with decode functionality (pread). Contributed by Zhe Zhang.

2015-05-18 Thread zhz
HDFS-7678. Erasure coding: DFSInputStream with decode functionality (pread). 
Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e83d1b8b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e83d1b8b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e83d1b8b

Branch: refs/heads/HDFS-7285
Commit: e83d1b8bb3a2cbb10b6cfa734e0d8f008f4c3aff
Parents: 206ad7f
Author: Zhe Zhang z...@apache.org
Authored: Mon May 11 21:10:23 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:01 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../hadoop/hdfs/DFSStripedInputStream.java  | 164 --
 .../erasurecode/ErasureCodingWorker.java|  10 +-
 .../hadoop/hdfs/util/StripedBlockUtil.java  | 517 +--
 .../hadoop/hdfs/TestDFSStripedInputStream.java  |  97 +++-
 .../hadoop/hdfs/TestWriteReadStripedFile.java   |  49 ++
 6 files changed, 768 insertions(+), 72 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e83d1b8b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index c7d01c7..0acf746 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -195,3 +195,6 @@
 
 HDFS-8355. Erasure Coding: Refactor BlockInfo and 
BlockInfoUnderConstruction.
 (Tsz Wo Nicholas Sze via jing9)
+
+HDFS-7678. Erasure coding: DFSInputStream with decode functionality 
(pread).
+(Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e83d1b8b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index 7425e75..7678fae 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -21,15 +21,27 @@ import com.google.common.base.Preconditions;
 import org.apache.hadoop.fs.ChecksumException;
 import org.apache.hadoop.fs.ReadOption;
 import org.apache.hadoop.fs.StorageType;
-import org.apache.hadoop.hdfs.protocol.*;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
 import 
org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException;
 import org.apache.hadoop.hdfs.util.StripedBlockUtil;
 import org.apache.hadoop.io.ByteBufferPool;
 
-import static org.apache.hadoop.hdfs.util.StripedBlockUtil.ReadPortion;
 import static org.apache.hadoop.hdfs.util.StripedBlockUtil.planReadPortions;
+import static 
org.apache.hadoop.hdfs.util.StripedBlockUtil.divideByteRangeIntoStripes;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.initDecodeInputs;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.decodeAndFillBuffer;
+import static 
org.apache.hadoop.hdfs.util.StripedBlockUtil.getNextCompletedStripedRead;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.ReadPortion;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.AlignedStripe;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.StripingChunk;
+import static 
org.apache.hadoop.hdfs.util.StripedBlockUtil.StripingChunkReadResult;
 
 import org.apache.hadoop.io.erasurecode.ECSchema;
+
 import org.apache.hadoop.net.NetUtils;
 import org.apache.htrace.Span;
 import org.apache.htrace.Trace;
@@ -37,10 +49,12 @@ import org.apache.htrace.TraceScope;
 
 import java.io.EOFException;
 import java.io.IOException;
+import java.io.InterruptedIOException;
 import java.net.InetSocketAddress;
 import java.nio.ByteBuffer;
 import java.util.EnumSet;
 import java.util.Set;
+import java.util.Collection;
 import java.util.Map;
 import java.util.HashMap;
 import java.util.concurrent.CompletionService;
@@ -51,7 +65,6 @@ import java.util.concurrent.CancellationException;
 import java.util.concurrent.Callable;
 import java.util.concurrent.Future;
 
-
 /**
  * DFSStripedInputStream reads from striped block groups, illustrated below:
  *
@@ -125,6 +138,7 @@ public class 

[05/50] hadoop git commit: HDFS-8190. StripedBlockUtil.getInternalBlockLength may have overflow error.

2015-05-18 Thread zhz
HDFS-8190. StripedBlockUtil.getInternalBlockLength may have overflow error.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aaab49dd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aaab49dd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aaab49dd

Branch: refs/heads/HDFS-7285
Commit: aaab49dd0515fb2dc89d66900343bc7ee060daca
Parents: 936547d
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Mon Apr 20 17:42:02 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:46 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../hadoop/hdfs/util/StripedBlockUtil.java  |  61 ---
 .../hadoop/hdfs/TestDFSStripedOutputStream.java | 178 +++
 3 files changed, 100 insertions(+), 142 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aaab49dd/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index c8dbf08..8f28285 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -104,3 +104,6 @@
 
 HDFS-8181. createErasureCodingZone sets retryCache state as false always
 (Uma Maheswara Rao G via vinayakumarb)
+
+HDFS-8190. StripedBlockUtil.getInternalBlockLength may have overflow error.
+(szetszwo)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aaab49dd/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
index 2368021..d622d4d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
@@ -25,6 +25,8 @@ import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
 
+import com.google.common.base.Preconditions;
+
 /**
  * Utility class for analyzing striped block groups
  */
@@ -81,46 +83,43 @@ public class StripedBlockUtil {
   /**
* Get the size of an internal block at the given index of a block group
*
-   * @param numBytesInGroup Size of the block group only counting data blocks
+   * @param dataSize Size of the block group only counting data blocks
* @param cellSize The size of a striping cell
-   * @param dataBlkNum The number of data blocks
-   * @param idxInGroup The logical index in the striped block group
+   * @param numDataBlocks The number of data blocks
+   * @param i The logical index in the striped block group
* @return The size of the internal block at the specified index
*/
-  public static long getInternalBlockLength(long numBytesInGroup,
-  int cellSize, int dataBlkNum, int idxInGroup) {
+  public static long getInternalBlockLength(long dataSize,
+  int cellSize, int numDataBlocks, int i) {
+Preconditions.checkArgument(dataSize >= 0);
+Preconditions.checkArgument(cellSize > 0);
+Preconditions.checkArgument(numDataBlocks > 0);
+Preconditions.checkArgument(i >= 0);
 // Size of each stripe (only counting data blocks)
-final long numBytesPerStripe = cellSize * dataBlkNum;
-assert numBytesPerStripe > 0:
-"getInternalBlockLength should only be called on valid striped blocks";
+final int stripeSize = cellSize * numDataBlocks;
 // If block group ends at stripe boundary, each internal block has an equal
 // share of the group
-if (numBytesInGroup % numBytesPerStripe == 0) {
-  return numBytesInGroup / dataBlkNum;
+final int lastStripeDataLen = (int)(dataSize % stripeSize);
+if (lastStripeDataLen == 0) {
+  return dataSize / numDataBlocks;
 }
 
-int numStripes = (int) ((numBytesInGroup - 1) / numBytesPerStripe + 1);
-assert numStripes >= 1 : "There should be at least 1 stripe";
-
-// All stripes but the last one are full stripes. The block should at least
-// contain (numStripes - 1) full cells.
-long blkSize = (numStripes - 1) * cellSize;
-
-long lastStripeLen = numBytesInGroup % numBytesPerStripe;
-// Size of parity cells should equal the size of the first cell, if it
-// is not full.
-long lastParityCellLen = Math.min(cellSize, lastStripeLen);
-
-if (idxInGroup >= dataBlkNum) {
-  // for 

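A worked example against the new signature, with cellSize = 256 KB and numDataBlocks = 6 (so stripeSize = 1536 KB): dataSize = 1600 KB is one full stripe plus lastStripeDataLen = 64 KB spilling into cell 0 of the last stripe. Data block 0 then holds 256 + 64 = 320 KB, data blocks 1-5 hold 256 KB each, and, per the parity rule in the removed code above, each parity block matches the first cell of the last stripe, 320 KB as well:

    // All sizes in bytes; expected result is 327680 (320 KB).
    long len0 = StripedBlockUtil.getInternalBlockLength(
        1600 * 1024,  // dataSize
        256 * 1024,   // cellSize
        6,            // numDataBlocks
        0);           // index of the first data block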
[08/50] hadoop git commit: HDFS-8136. Client gets and uses EC schema when reads and writes a stripping file. Contributed by Kai Sasaki

2015-05-18 Thread zhz
HDFS-8136. Client gets and uses EC schema when reads and writes a stripping 
file. Contributed by Kai Sasaki


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b3cdbc8e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b3cdbc8e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b3cdbc8e

Branch: refs/heads/HDFS-7285
Commit: b3cdbc8e9e62b49bee2b4ce39c3aeedddfbe7f39
Parents: 0df2b78
Author: Kai Zheng kai.zh...@intel.com
Authored: Fri Apr 24 00:19:12 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:47 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../hadoop/hdfs/DFSStripedInputStream.java  |  17 +-
 .../hadoop/hdfs/DFSStripedOutputStream.java |  24 ++-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 175 +++
 .../hadoop/hdfs/TestDFSStripedOutputStream.java |   4 +-
 .../apache/hadoop/hdfs/TestReadStripedFile.java |   1 -
 6 files changed, 209 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b3cdbc8e/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index b2faac0..8977c46 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -119,3 +119,6 @@
 
 HDFS-8156. Add/implement necessary APIs even we just have the system 
default 
 schema. (Kai Zheng via Zhe Zhang)
+
+HDFS-8136. Client gets and uses EC schema when reads and writes a stripping
+file. (Kai Sasaki via Kai Zheng)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b3cdbc8e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index d597407..d0e2b68 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -21,9 +21,9 @@ import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
-import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
+import org.apache.hadoop.hdfs.protocol.ECInfo;
 import org.apache.hadoop.hdfs.server.namenode.UnsupportedActionException;
 import org.apache.hadoop.hdfs.util.StripedBlockUtil;
 import org.apache.hadoop.net.NetUtils;
@@ -125,13 +125,19 @@ public class DFSStripedInputStream extends DFSInputStream 
{
 return results;
   }
 
-  private int cellSize = HdfsConstants.BLOCK_STRIPED_CELL_SIZE;
-  private final short dataBlkNum = HdfsConstants.NUM_DATA_BLOCKS;
-  private final short parityBlkNum = HdfsConstants.NUM_PARITY_BLOCKS;
+  private final int cellSize;
+  private final short dataBlkNum;
+  private final short parityBlkNum;
+  private final ECInfo ecInfo;
 
   DFSStripedInputStream(DFSClient dfsClient, String src, boolean 
verifyChecksum)
   throws IOException {
 super(dfsClient, src, verifyChecksum);
+// ECInfo is restored from NN just before reading striped file.
+ecInfo = dfsClient.getErasureCodingInfo(src);
+cellSize = ecInfo.getSchema().getChunkSize();
+dataBlkNum = (short)ecInfo.getSchema().getNumDataUnits();
+parityBlkNum = (short)ecInfo.getSchema().getNumParityUnits();
+DFSClient.LOG.debug("Creating an striped input stream for file " + src);
   }
 
@@ -279,9 +285,6 @@ public class DFSStripedInputStream extends DFSInputStream {
 throw new InterruptedException(let's retry);
   }
 
-  public void setCellSize(int cellSize) {
-this.cellSize = cellSize;
-  }
 
   /**
* This class represents the portion of I/O associated with each block in the

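With the geometry now coming from the NameNode, setCellSize could be dropped: cellSize, dataBlkNum and parityBlkNum are final and fixed when the stream is opened. Applications are unaffected; reading a file in an EC zone stays an ordinary open (the path below is hypothetical):

    FSDataInputStream in = fs.open(new Path("/eczone/f"));
    // The stream fetches the zone's schema itself before the first byte is read.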
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b3cdbc8e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
index 

[48/50] hadoop git commit: HADOOP-11920. Refactor some codes for erasure coders. Contributed by Kai Zheng.

2015-05-18 Thread zhz
HADOOP-11920. Refactor some codes for erasure coders. Contributed by Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6e2ccdec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6e2ccdec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6e2ccdec

Branch: refs/heads/HDFS-7285
Commit: 6e2ccdec6b0ec7502965a70278baec4637b22934
Parents: 768b992
Author: Zhe Zhang z...@apache.org
Authored: Mon May 18 10:09:57 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:09:57 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  2 +
 .../hadoop/fs/CommonConfigurationKeys.java  |  4 --
 .../apache/hadoop/io/erasurecode/ECChunk.java   |  2 +-
 .../erasurecode/coder/AbstractErasureCoder.java |  6 +-
 .../io/erasurecode/coder/RSErasureDecoder.java  | 40 +
 .../rawcoder/AbstractRawErasureCoder.java   | 63 +++-
 .../rawcoder/AbstractRawErasureDecoder.java | 54 ++---
 .../rawcoder/AbstractRawErasureEncoder.java | 52 +++-
 .../erasurecode/rawcoder/RawErasureCoder.java   |  8 +--
 .../erasurecode/rawcoder/RawErasureDecoder.java | 24 +---
 .../io/erasurecode/rawcoder/XORRawDecoder.java  | 24 ++--
 .../io/erasurecode/rawcoder/XORRawEncoder.java  |  6 +-
 .../hadoop/io/erasurecode/TestCoderBase.java|  4 +-
 .../erasurecode/coder/TestRSErasureCoder.java   |  6 +-
 14 files changed, 156 insertions(+), 139 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e2ccdec/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index c10ffbd..a152e31 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -46,3 +46,5 @@
 HADOOP-11841. Remove unused ecschema-def.xml files.  (szetszwo)
 
 HADOOP-11921. Enhance tests for erasure coders. (Kai Zheng via Zhe Zhang)
+
+HADOOP-11920. Refactor some codes for erasure coders. (Kai Zheng via Zhe 
Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e2ccdec/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
index bd2a24b..3f2871b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
@@ -143,10 +143,6 @@ public class CommonConfigurationKeys extends 
CommonConfigurationKeysPublic {
   /** Supported erasure codec classes */
   public static final String IO_ERASURECODE_CODECS_KEY = "io.erasurecode.codecs";
 
-  /** Use XOR raw coder when possible for the RS codec */
-  public static final String IO_ERASURECODE_CODEC_RS_USEXOR_KEY =
-      "io.erasurecode.codec.rs.usexor";
-
   /** Raw coder factory for the RS codec */
   public static final String IO_ERASURECODE_CODEC_RS_RAWCODER_KEY =
      "io.erasurecode.codec.rs.rawcoder";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e2ccdec/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
index 01e8f35..436e13e 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
@@ -71,7 +71,7 @@ public class ECChunk {
* @param chunks
* @return an array of byte array
*/
-  public static byte[][] toArray(ECChunk[] chunks) {
+  public static byte[][] toArrays(ECChunk[] chunks) {
 byte[][] bytesArr = new byte[chunks.length][];
 
 ByteBuffer buffer;

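The rename from toArray to toArrays better matches the return type: one byte[] per chunk. Typical use is handing chunk data to a raw coder; the decode call below is the byte-array variant these raw coders expose, sketched from the surrounding refactor rather than quoted from this hunk:

    byte[][] inputs  = ECChunk.toArrays(inputChunks);   // erased units come through as null
    byte[][] outputs = ECChunk.toArrays(outputChunks);
    rawDecoder.decode(inputs, erasedIndexes, outputs);  // fills the erased units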
http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e2ccdec/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
 

[18/50] hadoop git commit: Fix merge conflicts.

2015-05-18 Thread zhz
Fix merge conflicts.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0c0db83c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0c0db83c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0c0db83c

Branch: refs/heads/HDFS-7285
Commit: 0c0db83ca64db02cfbeddb9a9918778fb89d0d60
Parents: 5be2100
Author: Jing Zhao ji...@apache.org
Authored: Wed Apr 29 11:35:58 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:49 2015 -0700

--
 .../java/org/apache/hadoop/hdfs/DFSInputStream.java  |  7 +++
 .../apache/hadoop/hdfs/DFSStripedOutputStream.java   | 15 ---
 .../org/apache/hadoop/hdfs/StripedDataStreamer.java  |  7 ---
 3 files changed, 11 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c0db83c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index 6eb25d0..bef4da0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -1116,7 +1116,7 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   /**
* Read data from one DataNode.
* @param datanode the datanode from which to read data
-   * @param block the block to read
+   * @param blockStartOffset starting offset in the file
* @param startInBlk the startInBlk offset of the block
* @param endInBlk the endInBlk offset of the block
* @param buf the given byte array into which the data is read
@@ -1146,7 +1146,7 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   BlockReader reader = null;
   try {
 DFSClientFaultInjector.get().fetchFromDatanodeException();
-reader = getBlockReader(block, start, len, datanode.addr,
+reader = getBlockReader(block, startInBlk, len, datanode.addr,
 datanode.storageType, datanode.info);
 for (int i = 0; i < offsets.length; i++) {
   int nread = reader.readAll(buf, offsets[i], lengths[i]);
@@ -1203,8 +1203,7 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
* with each other.
*/
   private void checkReadPortions(int[] offsets, int[] lengths, int totalLen) {
-Preconditions.checkArgument(offsets.length == lengths.length &&
-offsets.length > 0);
+Preconditions.checkArgument(offsets.length == lengths.length && 
offsets.length > 0);
 int sum = 0;
 for (int i = 0; i < lengths.length; i++) {
   if (i > 0) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c0db83c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
index 6842267..c930187 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
@@ -124,10 +124,7 @@ public class DFSStripedOutputStream extends 
DFSOutputStream {
 for (short i = 0; i  numAllBlocks; i++) {
   StripedDataStreamer streamer = new StripedDataStreamer(stat, null,
   dfsClient, src, progress, checksum, cachingStrategy, 
byteArrayManager,
-  i, stripeBlocks);
-  if (favoredNodes != null && favoredNodes.length != 0) {
-streamer.setFavoredNodes(favoredNodes);
-  }
+  i, stripeBlocks, favoredNodes);
   s.add(streamer);
 }
 streamers = Collections.unmodifiableList(s);
@@ -316,7 +313,7 @@ public class DFSStripedOutputStream extends DFSOutputStream 
{
   return;
 }
 for (StripedDataStreamer streamer : streamers) {
-  streamer.setLastException(new IOException("Lease timeout of "
+  streamer.getLastException().set(new IOException("Lease timeout of "
   + (dfsClient.getConf().getHdfsTimeout()/1000) +
" seconds expired."));
 }
@@ -414,12 +411,8 @@ public class DFSStripedOutputStream extends 
DFSOutputStream {
   @Override
   protected synchronized void closeImpl() throws IOException {
 if (isClosed()) {
-  IOException e = getLeadingStreamer().getLastException().getAndSet(null);
-  if (e != null) {
-throw e;
-  } else {
-

[17/50] hadoop git commit: HDFS-8272. Erasure Coding: simplify the retry logic in DFSStripedInputStream (stateful read). Contributed by Jing Zhao

2015-05-18 Thread zhz
HDFS-8272. Erasure Coding: simplify the retry logic in DFSStripedInputStream 
(stateful read). Contributed by Jing Zhao


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8620d403
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8620d403
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8620d403

Branch: refs/heads/HDFS-7285
Commit: 8620d403cda63a8fe18656df9727ea551561a2cc
Parents: 0c0db83
Author: Zhe Zhang z...@apache.org
Authored: Wed Apr 29 15:53:31 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:49 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../hadoop/hdfs/DFSStripedInputStream.java  | 336 ---
 2 files changed, 150 insertions(+), 189 deletions(-)
--
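
For readers following the patch, the core idea is a per-read policy object
that permits each recoverable failure (expired encryption key, stale block
token) exactly once. A minimal runnable sketch, with stand-in types and a
fake read call rather than the HDFS internals:

import java.io.IOException;

public class RetryPolicySketch {
  // Mirrors the ReaderRetryPolicy added below: one retry budget per cause.
  static class ReaderRetryPolicy {
    private int fetchTokenTimes = 1;
    boolean shouldRefetchToken() { return fetchTokenTimes > 0; }
    void refetchToken() { fetchTokenTimes--; }
  }

  private static int attempts = 0;

  // Stand-in for a datanode read that fails once with a stale block token.
  static void readChunk() throws IOException {
    if (attempts++ == 0) {
      throw new IOException("stale block token");
    }
  }

  public static void main(String[] args) throws IOException {
    ReaderRetryPolicy retry = new ReaderRetryPolicy();
    while (true) {
      try {
        readChunk();
        System.out.println("read succeeded after " + attempts + " attempt(s)");
        return;
      } catch (IOException e) {
        if (!retry.shouldRefetchToken()) {
          throw e;              // budget exhausted: propagate
        }
        retry.refetchToken();   // consume the single allowed retry
      }
    }
  }
}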


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8620d403/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 9b4bf24..6a9bdee 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -143,3 +143,6 @@
 
 HDFS-8235. Erasure Coding: Create DFSStripedInputStream in DFSClient#open.
 (Kai Sasaki via jing9)
+
+HDFS-8272. Erasure Coding: simplify the retry logic in 
DFSStripedInputStream 
+(stateful read). (Jing Zhao via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8620d403/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index f6f7ed2..3da7306 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -22,11 +22,8 @@ import org.apache.hadoop.fs.ChecksumException;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.*;
 import 
org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException;
-import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
-import org.apache.hadoop.hdfs.server.datanode.CachingStrategy;
 import org.apache.hadoop.hdfs.util.StripedBlockUtil;
 import org.apache.hadoop.net.NetUtils;
-import org.apache.hadoop.security.token.Token;
 import org.apache.htrace.Span;
 import org.apache.htrace.Trace;
 import org.apache.htrace.TraceScope;
@@ -126,23 +123,42 @@ public class DFSStripedInputStream extends DFSInputStream 
{
 return results;
   }
 
+  private static class ReaderRetryPolicy {
+private int fetchEncryptionKeyTimes = 1;
+private int fetchTokenTimes = 1;
+
+void refetchEncryptionKey() {
+  fetchEncryptionKeyTimes--;
+}
+
+void refetchToken() {
+  fetchTokenTimes--;
+}
+
+boolean shouldRefetchEncryptionKey() {
+  return fetchEncryptionKeyTimes > 0;
+}
+
+boolean shouldRefetchToken() {
+  return fetchTokenTimes > 0;
+}
+  }
+
   private final short groupSize = HdfsConstants.NUM_DATA_BLOCKS;
-  private BlockReader[] blockReaders = null;
-  private DatanodeInfo[] currentNodes = null;
+  private final BlockReader[] blockReaders = new BlockReader[groupSize];
+  private final DatanodeInfo[] currentNodes = new DatanodeInfo[groupSize];
   private final int cellSize;
   private final short dataBlkNum;
   private final short parityBlkNum;
-  private final ECInfo ecInfo;
 
-  DFSStripedInputStream(DFSClient dfsClient, String src, boolean 
verifyChecksum, ECInfo info)
-  throws IOException {
+  DFSStripedInputStream(DFSClient dfsClient, String src, boolean 
verifyChecksum,
+  ECInfo ecInfo) throws IOException {
 super(dfsClient, src, verifyChecksum);
 // ECInfo is restored from NN just before reading striped file.
-assert info != null;
-ecInfo = info;
+assert ecInfo != null;
 cellSize = ecInfo.getSchema().getChunkSize();
-dataBlkNum = (short)ecInfo.getSchema().getNumDataUnits();
-parityBlkNum = (short)ecInfo.getSchema().getNumParityUnits();
+dataBlkNum = (short) ecInfo.getSchema().getNumDataUnits();
+parityBlkNum = (short) ecInfo.getSchema().getNumParityUnits();
 DFSClient.LOG.debug("Creating an striped input stream for file " + src);
   }
 
@@ -162,9 +178,7 @@ public class DFSStripedInputStream extends DFSInputStream {
* When seeking into a new block group, create blockReader for each internal

[40/50] hadoop git commit: HDFS-8363. Erasure Coding: DFSStripedInputStream#seekToNewSource. (yliu)

2015-05-18 Thread zhz
HDFS-8363. Erasure Coding: DFSStripedInputStream#seekToNewSource. (yliu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cc5b07e3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cc5b07e3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cc5b07e3

Branch: refs/heads/HDFS-7285
Commit: cc5b07e3e75287f1aca24571d336aa57221c612f
Parents: e9ac9f8
Author: yliu y...@apache.org
Authored: Wed May 13 08:48:56 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:01 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt |  2 ++
 .../apache/hadoop/hdfs/DFSStripedInputStream.java| 15 ---
 2 files changed, 14 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc5b07e3/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 79ad208..0a2bb9e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -204,3 +204,5 @@
 
 HDFS-8368. Erasure Coding: DFS opening a non-existent file need to be 
 handled properly (Rakesh R via zhz)
+
+HDFS-8363. Erasure Coding: DFSStripedInputStream#seekToNewSource. (yliu)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc5b07e3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index 7678fae..8f15eda 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -130,12 +130,12 @@ public class DFSStripedInputStream extends DFSInputStream 
{
 }
   }
 
-  private final short groupSize = HdfsConstants.NUM_DATA_BLOCKS;
-  private final BlockReader[] blockReaders = new BlockReader[groupSize];
-  private final DatanodeInfo[] currentNodes = new DatanodeInfo[groupSize];
+  private final BlockReader[] blockReaders;
+  private final DatanodeInfo[] currentNodes;
   private final int cellSize;
   private final short dataBlkNum;
   private final short parityBlkNum;
+  private final short groupSize;
   /** the buffer for a complete stripe */
   private ByteBuffer curStripeBuf;
   private final ECSchema schema;
@@ -155,6 +155,9 @@ public class DFSStripedInputStream extends DFSInputStream {
 cellSize = schema.getChunkSize();
 dataBlkNum = (short) schema.getNumDataUnits();
 parityBlkNum = (short) schema.getNumParityUnits();
+groupSize = dataBlkNum;
+blockReaders = new BlockReader[groupSize];
+currentNodes = new DatanodeInfo[groupSize];
 curStripeRange = new StripeRange(0, 0);
 readingService =
 new ExecutorCompletionService<>(dfsClient.getStripedReadsThreadPool());
@@ -392,6 +395,12 @@ public class DFSStripedInputStream extends DFSInputStream {
   }
 
   @Override
+  public synchronized boolean seekToNewSource(long targetPos)
+  throws IOException {
+return false;
+  }
+
+  @Override
   protected synchronized int readWithStrategy(ReaderStrategy strategy,
   int off, int len) throws IOException {
 dfsClient.checkOpen();



[38/50] hadoop git commit: HDFS-8368. Erasure Coding: DFS opening a non-existent file needs to be handled properly. Contributed by Rakesh R.

2015-05-18 Thread zhz
HDFS-8368. Erasure Coding: DFS opening a non-existent file needs to be handled 
properly. Contributed by Rakesh R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e9ac9f80
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e9ac9f80
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e9ac9f80

Branch: refs/heads/HDFS-7285
Commit: e9ac9f80d272c32e3f5da3fe5144cdd4b68661b5
Parents: 536db0c
Author: Zhe Zhang z...@apache.org
Authored: Tue May 12 14:31:28 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:01 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSClient.java | 12 +++-
 2 files changed, 10 insertions(+), 5 deletions(-)
--
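
Before this change, DFSClient#open dereferenced the result of getFileInfo(src)
without a null check, so opening a missing path could surface as a
NullPointerException instead of the usual not-found handling. A toy
illustration of the guard (stand-in types, not the HDFS classes):

public class NullGuardSketch {
  static class FileStatusish {
    Object getECSchema() { return null; }  // no EC schema on this file
  }

  // Stand-in for getFileInfo: null means the path does not exist.
  static FileStatusish getFileInfo(String src) {
    return "/exists".equals(src) ? new FileStatusish() : null;
  }

  static String open(String src) {
    FileStatusish info = getFileInfo(src);
    if (info != null && info.getECSchema() != null) {
      return "striped";
    }
    // missing paths fall through; the real stream constructor then raises
    // the usual file-not-found error instead of an earlier NPE
    return "contiguous";
  }

  public static void main(String[] args) {
    System.out.println(open("/exists"));   // contiguous
    System.out.println(open("/missing"));  // no NullPointerException
  }
}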


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9ac9f80/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index f026a5c..79ad208 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -201,3 +201,6 @@
 
 HDFS-8372. Erasure coding: compute storage type quotas for striped files,
 to be consistent with HDFS-8327. (Zhe Zhang via jing9)
+
+HDFS-8368. Erasure Coding: DFS opening a non-existent file need to be 
+handled properly (Rakesh R via zhz)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9ac9f80/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 12c4a4b..cde1fc8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -1191,12 +1191,14 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 //Get block info from namenode
 TraceScope scope = getPathTraceScope("newDFSInputStream", src);
 try {
-  ECSchema schema = getFileInfo(src).getECSchema();
-  if (schema != null) {
-return new DFSStripedInputStream(this, src, verifyChecksum, schema);
-  } else {
-return new DFSInputStream(this, src, verifyChecksum);
+  HdfsFileStatus fileInfo = getFileInfo(src);
+  if (fileInfo != null) {
+ECSchema schema = fileInfo.getECSchema();
+if (schema != null) {
+  return new DFSStripedInputStream(this, src, verifyChecksum, schema);
+}
   }
+  return new DFSInputStream(this, src, verifyChecksum);
 } finally {
   scope.close();
 }



[03/50] hadoop git commit: HDFS-8216. TestDFSStripedOutputStream should use BlockReaderTestUtil to create BlockReader. Contributed by Tsz Wo Nicholas Sze.

2015-05-18 Thread zhz
HDFS-8216. TestDFSStripedOutputStream should use BlockReaderTestUtil to create 
BlockReader. Contributed by Tsz Wo Nicholas Sze.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/25f53002
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/25f53002
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/25f53002

Branch: refs/heads/HDFS-7285
Commit: 25f53002c23c77dd0eed922a79d5c5b183e7d7fe
Parents: aaab49d
Author: Zhe Zhang z...@apache.org
Authored: Tue Apr 21 20:56:39 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:46 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +
 .../apache/hadoop/hdfs/BlockReaderTestUtil.java |  7 +--
 .../hadoop/hdfs/TestBlockReaderFactory.java | 16 +++---
 .../hadoop/hdfs/TestDFSStripedOutputStream.java | 58 ++--
 4 files changed, 20 insertions(+), 64 deletions(-)
--
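
For anyone updating call sites: the utility now takes the
DistributedFileSystem directly instead of the MiniDFSCluster. A usage sketch
along the lines of the tests below (imports and error handling elided; treat
it as illustrative rather than copy-paste ready):

Configuration conf = new Configuration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
try {
  DistributedFileSystem fs = cluster.getFileSystem();
  DFSTestUtil.createFile(fs, new Path("/f"), 1024, (short) 1, 0L);
  LocatedBlock lblock = fs.getClient().getLocatedBlocks("/f", 0)
      .getLocatedBlocks().get(0);
  // new static overload: filesystem instead of cluster
  BlockReader reader = BlockReaderTestUtil.getBlockReader(fs, lblock, 0, 1024);
  byte[] buf = new byte[1024];
  reader.readAll(buf, 0, buf.length);   // read the whole test file
  reader.close();
} finally {
  cluster.shutdown();
}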


http://git-wip-us.apache.org/repos/asf/hadoop/blob/25f53002/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 8f28285..d8f2e9d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -107,3 +107,6 @@
 
 HDFS-8190. StripedBlockUtil.getInternalBlockLength may have overflow error.
 (szetszwo)
+
+HDFS-8216. TestDFSStripedOutputStream should use BlockReaderTestUtil to 
+create BlockReader. (szetszwo via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/25f53002/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
index 88b7f37..829cf03 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
@@ -165,20 +165,19 @@ public class BlockReaderTestUtil {
*/
   public BlockReader getBlockReader(LocatedBlock testBlock, int offset, int 
lenToRead)
   throws IOException {
-return getBlockReader(cluster, testBlock, offset, lenToRead);
+return getBlockReader(cluster.getFileSystem(), testBlock, offset, 
lenToRead);
   }
 
   /**
* Get a BlockReader for the given block.
*/
-  public static BlockReader getBlockReader(MiniDFSCluster cluster,
-  LocatedBlock testBlock, int offset, int lenToRead) throws IOException {
+  public static BlockReader getBlockReader(final DistributedFileSystem fs,
+  LocatedBlock testBlock, int offset, long lenToRead) throws IOException {
 InetSocketAddress targetAddr = null;
 ExtendedBlock block = testBlock.getBlock();
 DatanodeInfo[] nodes = testBlock.getLocations();
 targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());
 
-final DistributedFileSystem fs = cluster.getFileSystem();
 return new BlockReaderFactory(fs.getClient().getConf()).
   setInetSocketAddress(targetAddr).
   setBlock(block).

http://git-wip-us.apache.org/repos/asf/hadoop/blob/25f53002/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderFactory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderFactory.java
index d8aceff..1a767c3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderFactory.java
@@ -250,8 +250,8 @@ public class TestBlockReaderFactory {
   LocatedBlock lblock = locatedBlocks.get(0); // first block
   BlockReader blockReader = null;
   try {
-blockReader = BlockReaderTestUtil.
-getBlockReader(cluster, lblock, 0, TEST_FILE_LEN);
+blockReader = BlockReaderTestUtil.getBlockReader(
+cluster.getFileSystem(), lblock, 0, TEST_FILE_LEN);
 Assert.fail("expected getBlockReader to fail the first time.");
   } catch (Throwable t) { 
 Assert.assertTrue("expected to see 'TCP reads were disabled " +
@@ -265,8 +265,8 @@ public class TestBlockReaderFactory {
 
   // Second time should succeed.
   

[21/50] hadoop git commit: HDFS-7949. WebImageViewer needs to support file size calculation with striped blocks. Contributed by Rakesh R.

2015-05-18 Thread zhz
HDFS-7949. WebImageViewer needs to support file size calculation with striped 
blocks. Contributed by Rakesh R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bdd264f4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bdd264f4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bdd264f4

Branch: refs/heads/HDFS-7285
Commit: bdd264f497693f47f99820f000427153f6dda719
Parents: 889529b
Author: Zhe Zhang z...@apache.org
Authored: Fri May 1 15:59:58 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:50 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../blockmanagement/BlockInfoStriped.java   |  27 +--
 .../tools/offlineImageViewer/FSImageLoader.java |  21 ++-
 .../hadoop/hdfs/util/StripedBlockUtil.java  |  22 +++
 ...TestOfflineImageViewerWithStripedBlocks.java | 166 +++
 5 files changed, 212 insertions(+), 27 deletions(-)
--
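
The computation being consolidated into StripedBlockUtil charges full cells
for every stripe except possibly the last, then adds the trailing data bytes
plus one partial parity cell per parity block. A runnable rendition of the
formula shown in the diff below, with an illustrative 6 data + 3 parity
schema and a 64 KB cell (assumptions for the example, not necessarily the
branch defaults):

public class StripedSpaceSketch {
  static long spaceConsumed(long numBytes, int dataBlks, int parityBlks,
                            int cellSize) {
    int totalBlks = dataBlks + parityBlks;
    long bytesPerStripe = (long) dataBlks * cellSize;
    if (numBytes % bytesPerStripe == 0) {
      // only full stripes: every cell, data or parity, is a full cell
      return numBytes / dataBlks * totalBlks;
    }
    long numStripes = (numBytes - 1) / bytesPerStripe + 1;
    long lastParityCellLen = Math.min(numBytes % bytesPerStripe, cellSize);
    return (long) totalBlks * cellSize * (numStripes - 1)  // full stripes
        + numBytes % bytesPerStripe                        // trailing data
        + lastParityCellLen * parityBlks;                  // trailing parity
  }

  public static void main(String[] args) {
    int cell = 64 * 1024;
    long numBytes = 6L * cell + cell / 2;  // one full stripe plus half a cell
    // 9 blocks * 64 KB (stripe 0) + 32 KB data + 3 * 32 KB parity = 720896
    System.out.println(spaceConsumed(numBytes, 6, 3, cell));
  }
}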


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bdd264f4/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 596bbcf..145494f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -155,3 +155,6 @@
 
 HDFS-8308. Erasure Coding: NameNode may get blocked in 
waitForLoadingFSImage()
 when loading editlog. (jing9)
+
+HDFS-7949. WebImageViewer need support file size calculation with striped 
+blocks. (Rakesh R via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bdd264f4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
index 23e3153..f0e52e3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
@@ -19,9 +19,7 @@ package org.apache.hadoop.hdfs.server.blockmanagement;
 
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
-
-import java.io.DataOutput;
-import java.io.IOException;
+import org.apache.hadoop.hdfs.util.StripedBlockUtil;
 
 import static 
org.apache.hadoop.hdfs.protocol.HdfsConstants.BLOCK_STRIPED_CELL_SIZE;
 
@@ -203,28 +201,9 @@ public class BlockInfoStriped extends BlockInfo {
 // In case striped blocks, total usage by this striped blocks should
 // be the total of data blocks and parity blocks because
 // `getNumBytes` is the total of actual data block size.
-
-// 0. Calculate the total bytes per stripes <Num Bytes per Stripes>
-long numBytesPerStripe = dataBlockNum * BLOCK_STRIPED_CELL_SIZE;
-if (getNumBytes() % numBytesPerStripe == 0) {
-  return getNumBytes() / dataBlockNum * getTotalBlockNum();
+return StripedBlockUtil.spaceConsumedByStripedBlock(getNumBytes(),
+dataBlockNum, parityBlockNum, BLOCK_STRIPED_CELL_SIZE);
 }
-// 1. Calculate the number of stripes in this block group. <Num Stripes>
-long numStripes = (getNumBytes() - 1) / numBytesPerStripe + 1;
-// 2. Calculate the parity cell length in the last stripe. Note that the
-//size of parity cells should equal the size of the first cell, if it
-//is not full. <Last Stripe Parity Cell Length>
-long lastStripeParityCellLen = Math.min(getNumBytes() % numBytesPerStripe,
-BLOCK_STRIPED_CELL_SIZE);
-// 3. Total consumed space is the total of
-// - The total of the full cells of data blocks and parity blocks.
-// - The remaining of data block which does not make a stripe.
-// - The last parity block cells. These size should be same
-//   to the first cell in this stripe.
-return getTotalBlockNum() * (BLOCK_STRIPED_CELL_SIZE * (numStripes - 1))
-+ getNumBytes() % numBytesPerStripe
-+ lastStripeParityCellLen * parityBlockNum;
-  }
 
   @Override
   public final boolean isStriped() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bdd264f4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
--
diff --git 

[Hadoop Wiki] Update of Roadmap by SomeOtherAccount

2015-05-18 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The Roadmap page has been changed by SomeOtherAccount:
https://wiki.apache.org/hadoop/Roadmap?action=diff&rev1=54&rev2=55

* Removal of hftp in favor of webhdfs 
[[https://issues.apache.org/jira/browse/HDFS-5570|HDFS-5570]]
   * YARN
   * MAPREDUCE
+* Derive heap size or mapreduce.*.memory.mb automatically 
[[https://issues.apache.org/jira/browse/MAPREDUCE-5785|MAPREDUCE-5785]]
  
  == Hadoop 2.x Releases ==
  


[12/50] hadoop git commit: HDFS-8230. Erasure Coding: Ignore DatanodeProtocol#DNA_ERASURE_CODING_RECOVERY commands from standbynode if any (Contributed by Vinayakumar B)

2015-05-18 Thread zhz
HDFS-8230. Erasure Coding: Ignore DatanodeProtocol#DNA_ERASURE_CODING_RECOVERY 
commands from standbynode if any (Contributed by Vinayakumar B)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6506d7d8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6506d7d8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6506d7d8

Branch: refs/heads/HDFS-7285
Commit: 6506d7d8b8d5d8c916bd56c63467e0b2cc1110b0
Parents: c79ec3c
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue Apr 28 14:14:33 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:48 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt  | 3 +++
 .../org/apache/hadoop/hdfs/server/datanode/BPOfferService.java| 1 +
 2 files changed, 4 insertions(+)
--
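
The change itself is one added case label; the pattern is a shared
warn-and-ignore branch for every command type that is meaningless coming from
a standby NameNode. A toy rendition (the enum values are stand-ins for the
DatanodeProtocol constants):

public class StandbyCommandSketch {
  enum Cmd { BALANCER_BANDWIDTH, CACHE, UNCACHE, ERASURE_CODING_RECOVERY, TRANSFER }

  static void processFromStandby(Cmd cmd) {
    switch (cmd) {
      case BALANCER_BANDWIDTH:
      case CACHE:
      case UNCACHE:
      case ERASURE_CODING_RECOVERY:  // the case this patch adds to the group
        System.out.println("Got a command from standby NN - ignoring: " + cmd);
        break;
      default:
        System.out.println("unexpected from standby: " + cmd);
    }
  }

  public static void main(String[] args) {
    processFromStandby(Cmd.ERASURE_CODING_RECOVERY);
  }
}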


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6506d7d8/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index e8db485..c28473b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -134,3 +134,6 @@
 
 HDFS-8033. Erasure coding: stateful (non-positional) read from files in 
 striped layout (Zhe Zhang)
+
+HDFS-8230. Erasure Coding: Ignore 
DatanodeProtocol#DNA_ERASURE_CODING_RECOVERY 
+commands from standbynode if any (vinayakumarb)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6506d7d8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
index 69baac7..6606d0b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
@@ -757,6 +757,7 @@ class BPOfferService {
 case DatanodeProtocol.DNA_BALANCERBANDWIDTHUPDATE:
 case DatanodeProtocol.DNA_CACHE:
 case DatanodeProtocol.DNA_UNCACHE:
+case DatanodeProtocol.DNA_ERASURE_CODING_RECOVERY:
   LOG.warn("Got a command from standby NN - ignoring command: " + 
cmd.getAction());
   break;
 default:



[15/50] hadoop git commit: HDFS-8223. Should calculate checksum for parity blocks in DFSStripedOutputStream. Contributed by Yi Liu.

2015-05-18 Thread zhz
HDFS-8223. Should calculate checksum for parity blocks in 
DFSStripedOutputStream. Contributed by Yi Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b4a33f26
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b4a33f26
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b4a33f26

Branch: refs/heads/HDFS-7285
Commit: b4a33f2667093e99ee499467b4771edc77284d9d
Parents: d00ff69
Author: Jing Zhao ji...@apache.org
Authored: Thu Apr 23 15:48:21 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:48 2015 -0700

--
 .../main/java/org/apache/hadoop/fs/FSOutputSummer.java|  4 
 hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt  |  3 +++
 .../org/apache/hadoop/hdfs/DFSStripedOutputStream.java| 10 ++
 3 files changed, 17 insertions(+)
--
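
Two sizing formulas carry the patch: the per-cell checksum buffer and the
per-packet slice written via writeChecksum. Worked numbers matching the
expressions in the diff below, assuming 512-byte checksum chunks and 4-byte
CRCs purely for illustration:

public class ParityChecksumMath {
  public static void main(String[] args) {
    int bytesPerChecksum = 512;   // assumed chunk size
    int checksumSize = 4;         // assumed CRC width
    int cellSize = 64 * 1024;     // assumed cell size

    // checksumBuf covers one checksum per chunk of a cell:
    int bufLen = checksumSize * (cellSize / bytesPerChecksum);
    System.out.println("checksumBuf length = " + bufLen);   // 512

    // per-packet slice: one checksum per started chunk of the packet
    int toWrite = 13000;  // bytes going into this packet (example)
    int ckLen = ((toWrite - 1) / bytesPerChecksum + 1) * checksumSize;
    System.out.println("ckLen = " + ckLen);                 // 26 chunks * 4 = 104
  }
}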


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b4a33f26/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
index bdc5585..a8a7494 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
@@ -196,6 +196,10 @@ abstract public class FSOutputSummer extends OutputStream {
 return sum.getChecksumSize();
   }
 
+  protected DataChecksum getDataChecksum() {
+return sum;
+  }
+
   protected TraceScope createWriteTraceScope() {
 return NullScope.INSTANCE;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b4a33f26/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 48791b1..9357e23 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -125,3 +125,6 @@
 
 HDFS-8233. Fix DFSStripedOutputStream#getCurrentBlockGroupBytes when the 
last
 stripe is at the block group boundary. (jing9)
+
+HDFS-8223. Should calculate checksum for parity blocks in 
DFSStripedOutputStream.
+(Yi Liu via jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b4a33f26/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
index 245dfc1..6842267 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
@@ -62,6 +62,8 @@ public class DFSStripedOutputStream extends DFSOutputStream {
*/
   private final ECInfo ecInfo;
   private final int cellSize;
+  // checksum buffer, we only need to calculate checksum for parity blocks
+  private byte[] checksumBuf;
   private ByteBuffer[] cellBuffers;
 
   private final short numAllBlocks;
@@ -99,6 +101,7 @@ public class DFSStripedOutputStream extends DFSOutputStream {
 
 checkConfiguration();
 
+checksumBuf = new byte[getChecksumSize() * (cellSize / bytesPerChecksum)];
 cellBuffers = new ByteBuffer[numAllBlocks];
+List<BlockingQueue<LocatedBlock>> stripeBlocks = new ArrayList<>();
 
@@ -179,6 +182,10 @@ public class DFSStripedOutputStream extends 
DFSOutputStream {
   private List<DFSPacket> generatePackets(ByteBuffer byteBuffer)
   throws IOException{
 List<DFSPacket> packets = new ArrayList<>();
+assert byteBuffer.hasArray();
+getDataChecksum().calculateChunkedSums(byteBuffer.array(), 0,
+byteBuffer.remaining(), checksumBuf, 0);
+int ckOff = 0;
 while (byteBuffer.remaining() > 0) {
   DFSPacket p = createPacket(packetSize, chunksPerPacket,
   streamer.getBytesCurBlock(),
@@ -186,6 +193,9 @@ public class DFSStripedOutputStream extends DFSOutputStream 
{
   int maxBytesToPacket = p.getMaxChunks() * bytesPerChecksum;
   int toWrite = byteBuffer.remaining() > maxBytesToPacket ?
   maxBytesToPacket: byteBuffer.remaining();
+  int ckLen = ((toWrite - 1) / bytesPerChecksum + 1) * getChecksumSize();
+  p.writeChecksum(checksumBuf, ckOff, ckLen);
+  ckOff += ckLen;
   p.writeData(byteBuffer, 

[11/50] hadoop git commit: HDFS-8024. Erasure Coding: ECworker frame, basics, bootstrapping and configuration. (Contributed by Uma Maheswara Rao G)

2015-05-18 Thread zhz
HDFS-8024. Erasure Coding: ECworker frame, basics, bootstrapping and 
configuration. (Contributed by Uma Maheswara Rao G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/999b25f4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/999b25f4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/999b25f4

Branch: refs/heads/HDFS-7285
Commit: 999b25f415405e2af8852cf0458aee91dcbc72f5
Parents: 656e841
Author: Uma Maheswara Rao G umamah...@apache.org
Authored: Wed Apr 22 19:30:14 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:47 2015 -0700

--
 .../erasurecode/coder/AbstractErasureCoder.java |  2 +-
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  7 ++
 .../hdfs/server/datanode/BPOfferService.java|  6 ++
 .../hadoop/hdfs/server/datanode/DataNode.java   | 10 +++
 .../erasurecode/ErasureCodingWorker.java| 83 
 .../src/main/proto/DatanodeProtocol.proto   |  2 +
 7 files changed, 112 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/999b25f4/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
index e5bf11a..7403e35 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
@@ -66,7 +66,7 @@ public abstract class AbstractErasureCoder
* @param isEncoder
* @return raw coder
*/
-  protected static RawErasureCoder createRawCoder(Configuration conf,
+  public static RawErasureCoder createRawCoder(Configuration conf,
   String rawCoderFactoryKey, boolean isEncoder) {
 
 if (conf == null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/999b25f4/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 3d86f05..1acde41 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -113,3 +113,6 @@
 
 HDFS-8212. DistributedFileSystem.createErasureCodingZone should pass schema
 in FileSystemLinkResolver. (szetszwo via Zhe Zhang)
+
+HDFS-8024. Erasure Coding: ECworker frame, basics, bootstraping and 
configuration.
+(umamahesh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/999b25f4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
index c127b5f..68cfe7f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
@@ -973,6 +973,8 @@ public class PBHelper {
   return REG_CMD;
 case BlockIdCommand:
   return PBHelper.convert(proto.getBlkIdCmd());
+case BlockECRecoveryCommand:
+  return PBHelper.convert(proto.getBlkECRecoveryCmd());
 default:
   return null;
 }
@@ -1123,6 +1125,11 @@ public class PBHelper {
   builder.setCmdType(DatanodeCommandProto.Type.BlockIdCommand).
 setBlkIdCmd(PBHelper.convert((BlockIdCommand) datanodeCommand));
   break;
+case DatanodeProtocol.DNA_ERASURE_CODING_RECOVERY:
+  builder.setCmdType(DatanodeCommandProto.Type.BlockECRecoveryCommand)
+  .setBlkECRecoveryCmd(
+  convert((BlockECRecoveryCommand) datanodeCommand));
+  break;
 case DatanodeProtocol.DNA_UNKNOWN: //Not expected
 default:
   builder.setCmdType(DatanodeCommandProto.Type.NullDatanodeCommand);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/999b25f4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
--
diff --git 

[31/50] hadoop git commit: HDFS-8334. Erasure coding: rename DFSStripedInputStream related test classes. Contributed by Zhe Zhang.

2015-05-18 Thread zhz
HDFS-8334. Erasure coding: rename DFSStripedInputStream related test classes. 
Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1f46698f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1f46698f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1f46698f

Branch: refs/heads/HDFS-7285
Commit: 1f46698fa773c65a6b0c6d1fdebd586b080d309a
Parents: 0b00fe8
Author: Zhe Zhang z...@apache.org
Authored: Wed May 6 15:34:37 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:59 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   5 +
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 365 ---
 .../apache/hadoop/hdfs/TestReadStripedFile.java | 218 ---
 .../hadoop/hdfs/TestWriteReadStripedFile.java   | 261 +
 4 files changed, 427 insertions(+), 422 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1f46698f/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 0d2d448..8729f8a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -178,3 +178,8 @@
 
 HDFS-7348. Erasure Coding: DataNode reconstruct striped blocks. 
 (Yi Liu via Zhe Zhang)
+
+HADOOP-11921. Enhance tests for erasure coders. (Kai Zheng)
+
+HDFS-8334. Erasure coding: rename DFSStripedInputStream related test 
+classes. (Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1f46698f/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
index 11cdf7b..a1f704d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
@@ -17,245 +17,202 @@
  */
 package org.apache.hadoop.hdfs;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FileStatus;
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT;
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ECInfo;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
-
-import org.junit.AfterClass;
-import org.junit.Assert;
-import org.junit.BeforeClass;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset;
+import org.apache.hadoop.hdfs.server.namenode.ECSchemaManager;
+import org.apache.hadoop.hdfs.util.StripedBlockUtil;
+import org.junit.After;
+import org.junit.Before;
 import org.junit.Test;
 
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.List;
 
 public class TestDFSStripedInputStream {
-  private static int dataBlocks = HdfsConstants.NUM_DATA_BLOCKS;
-  private static int parityBlocks = HdfsConstants.NUM_PARITY_BLOCKS;
-
-
-  private static DistributedFileSystem fs;
-  private final static int cellSize = HdfsConstants.BLOCK_STRIPED_CELL_SIZE;
-  private final static int stripesPerBlock = 4;
-  static int blockSize = cellSize * stripesPerBlock;
-  static int numDNs = dataBlocks + parityBlocks + 2;
-
-  private static MiniDFSCluster cluster;
 
-  @BeforeClass
-  public static void setup() throws IOException {
-Configuration conf = new Configuration();
-conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
-cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDNs).build();
-cluster.getFileSystem().getClient().createErasureCodingZone("/", null);
+  public static final Log LOG = 
LogFactory.getLog(TestDFSStripedInputStream.class);
+
+  private MiniDFSCluster cluster;
+  private Configuration conf = new Configuration();
+  private 

[19/50] hadoop git commit: HDFS-8235. Erasure Coding: Create DFSStripedInputStream in DFSClient#open. Contributed by Kai Sasaki.

2015-05-18 Thread zhz
HDFS-8235. Erasure Coding: Create DFSStripedInputStream in DFSClient#open. 
Contributed by Kai Sasaki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5be21009
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5be21009
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5be21009

Branch: refs/heads/HDFS-7285
Commit: 5be21009a61433a86394ccfef42f530f71e38006
Parents: c1e85dc
Author: Jing Zhao ji...@apache.org
Authored: Tue Apr 28 13:42:24 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:49 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  5 -
 .../main/java/org/apache/hadoop/hdfs/DFSClient.java |  7 ++-
 .../apache/hadoop/hdfs/DFSStripedInputStream.java   |  5 +++--
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 16 +++-
 .../org/apache/hadoop/hdfs/TestReadStripedFile.java | 11 ---
 5 files changed, 28 insertions(+), 16 deletions(-)
--
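
The shape of the change to DFSClient#open is a simple type dispatch on
whether the path carries erasure-coding info. A toy, runnable rendition
(stand-in types; the real code constructs DFSStripedInputStream or
DFSInputStream):

public class OpenDispatchSketch {
  static class Stream { final boolean striped; Stream(boolean s) { striped = s; } }

  // Stand-in for DFSClient#getErasureCodingInfo: null outside any EC zone.
  static Object getErasureCodingInfo(String src) {
    return src.startsWith("/ec/") ? new Object() : null;
  }

  static Stream open(String src) {
    Object info = getErasureCodingInfo(src);
    if (info != null) {
      return new Stream(true);    // DFSStripedInputStream in the real code
    } else {
      return new Stream(false);   // plain DFSInputStream
    }
  }

  public static void main(String[] args) {
    System.out.println(open("/ec/file").striped);     // true
    System.out.println(open("/plain/file").striped);  // false
  }
}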


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5be21009/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 6c5d7ce..9b4bf24 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -139,4 +139,7 @@
 commands from standbynode if any (vinayakumarb)
 
 HDFS-8189. ClientProtocol#createErasureCodingZone API was wrongly annotated
-as Idempotent (vinayakumarb)
\ No newline at end of file
+as Idempotent (vinayakumarb)
+
+HDFS-8235. Erasure Coding: Create DFSStripedInputStream in DFSClient#open.
+(Kai Sasaki via jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5be21009/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index db13ae8..5fb23a0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -1191,7 +1191,12 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 //Get block info from namenode
 TraceScope scope = getPathTraceScope("newDFSInputStream", src);
 try {
-  return new DFSInputStream(this, src, verifyChecksum);
+  ECInfo info = getErasureCodingInfo(src);
+  if (info != null) {
+return new DFSStripedInputStream(this, src, verifyChecksum, info);
+  } else {
+return new DFSInputStream(this, src, verifyChecksum);
+  }
 } finally {
   scope.close();
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5be21009/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index fe9e101..f6f7ed2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -134,11 +134,12 @@ public class DFSStripedInputStream extends DFSInputStream 
{
   private final short parityBlkNum;
   private final ECInfo ecInfo;
 
-  DFSStripedInputStream(DFSClient dfsClient, String src, boolean 
verifyChecksum)
+  DFSStripedInputStream(DFSClient dfsClient, String src, boolean 
verifyChecksum, ECInfo info)
   throws IOException {
 super(dfsClient, src, verifyChecksum);
 // ECInfo is restored from NN just before reading striped file.
-ecInfo = dfsClient.getErasureCodingInfo(src);
+assert info != null;
+ecInfo = info;
 cellSize = ecInfo.getSchema().getChunkSize();
 dataBlkNum = (short)ecInfo.getSchema().getNumDataUnits();
 parityBlkNum = (short)ecInfo.getSchema().getNumParityUnits();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5be21009/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
 

[50/50] hadoop git commit: HADOOP-11938. Enhance ByteBuffer version encode/decode API of raw erasure coder. Contributed by Kai Zheng.

2015-05-18 Thread zhz
HADOOP-11938. Enhance ByteBuffer version encode/decode API of raw erasure 
coder. Contributed by Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/031b4d09
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/031b4d09
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/031b4d09

Branch: refs/heads/HDFS-7285
Commit: 031b4d09d57fd8442664bc290ed73a0146de641d
Parents: a37e214
Author: Zhe Zhang z...@apache.org
Authored: Mon May 18 10:14:54 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:14:54 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |   3 +
 .../apache/hadoop/io/erasurecode/ECChunk.java   |  35 ++---
 .../rawcoder/AbstractRawErasureCoder.java   |  77 +--
 .../rawcoder/AbstractRawErasureDecoder.java |  69 --
 .../rawcoder/AbstractRawErasureEncoder.java |  66 --
 .../io/erasurecode/rawcoder/RSRawDecoder.java   |  22 ++--
 .../io/erasurecode/rawcoder/RSRawEncoder.java   |  41 +++---
 .../io/erasurecode/rawcoder/XORRawDecoder.java  |  30 +++--
 .../io/erasurecode/rawcoder/XORRawEncoder.java  |  40 +++---
 .../erasurecode/rawcoder/util/GaloisField.java  | 112 
 .../hadoop/io/erasurecode/TestCoderBase.java| 131 +++
 .../erasurecode/coder/TestErasureCoderBase.java |  21 ++-
 .../io/erasurecode/rawcoder/TestRSRawCoder.java |  12 +-
 .../rawcoder/TestRSRawCoderBase.java|  12 +-
 .../erasurecode/rawcoder/TestRawCoderBase.java  |  57 +++-
 .../erasurecode/rawcoder/TestXORRawCoder.java   |  19 +++
 16 files changed, 535 insertions(+), 212 deletions(-)
--
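
Among the API changes, the new ECChunk#toBytesArray copies a buffer's
remaining bytes without disturbing its read position by bracketing the copy
with mark()/reset(). The idiom in isolation, as a self-contained sketch:

import java.nio.ByteBuffer;

public class BufferCopySketch {
  static byte[] toBytesArray(ByteBuffer buf) {
    byte[] bytes = new byte[buf.remaining()];
    buf.mark();    // remember the current position
    buf.get(bytes);
    buf.reset();   // restore it, leaving the buffer untouched for the caller
    return bytes;
  }

  public static void main(String[] args) {
    ByteBuffer b = ByteBuffer.wrap(new byte[]{1, 2, 3, 4});
    b.position(1);
    byte[] copy = toBytesArray(b);                          // {2, 3, 4}
    System.out.println(copy.length + " " + b.position());   // prints "3 1"
  }
}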


http://git-wip-us.apache.org/repos/asf/hadoop/blob/031b4d09/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 34dfc9e..c799b4f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -51,3 +51,6 @@
 
 HADOOP-11566. Add tests and fix for erasure coders to recover erased 
parity 
 units. (Kai Zheng via Zhe Zhang)
+
+HADOOP-11938. Enhance ByteBuffer version encode/decode API of raw erasure 
+coder. (Kai Zheng via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/031b4d09/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
index 69a8343..310c738 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
@@ -72,34 +72,15 @@ public class ECChunk {
   }
 
   /**
-   * Convert an array of this chunks to an array of byte array.
-   * Note the chunk buffers are not affected.
-   * @param chunks
-   * @return an array of byte array
+   * Convert to a bytes array, just for test usage.
+   * @return bytes array
*/
-  public static byte[][] toArrays(ECChunk[] chunks) {
-byte[][] bytesArr = new byte[chunks.length][];
-
-ByteBuffer buffer;
-ECChunk chunk;
-for (int i = 0; i < chunks.length; i++) {
-  chunk = chunks[i];
-  if (chunk == null) {
-bytesArr[i] = null;
-continue;
-  }
-
-  buffer = chunk.getBuffer();
-  if (buffer.hasArray()) {
-bytesArr[i] = buffer.array();
-  } else {
-bytesArr[i] = new byte[buffer.remaining()];
-// Avoid affecting the original one
-buffer.mark();
-buffer.get(bytesArr[i]);
-buffer.reset();
-  }
-}
+  public byte[] toBytesArray() {
+byte[] bytesArr = new byte[chunkBuffer.remaining()];
+// Avoid affecting the original one
+chunkBuffer.mark();
+chunkBuffer.get(bytesArr);
+chunkBuffer.reset();
 
 return bytesArr;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/031b4d09/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
index 2400313..5268962 100644
--- 

[32/50] hadoop git commit: HDFS-8129. Erasure Coding: Maintain consistent naming for Erasure Coding related classes - EC/ErasureCoding. Contributed by Uma Maheswara Rao G

2015-05-18 Thread zhz
HDFS-8129. Erasure Coding: Maintain consistent naming for Erasure Coding 
related classes - EC/ErasureCoding. Contributed by Uma Maheswara Rao G


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2bd83484
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2bd83484
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2bd83484

Branch: refs/heads/HDFS-7285
Commit: 2bd834840857079f8201bb3a7d9bfe842fd943e6
Parents: 1f46698
Author: Uma Maheswara Rao G umamah...@apache.org
Authored: Thu May 7 16:26:01 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:59 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  10 +-
 .../hadoop/hdfs/DFSStripedInputStream.java  |   2 +-
 .../hadoop/hdfs/DistributedFileSystem.java  |  10 +-
 .../hadoop/hdfs/protocol/ClientProtocol.java|   4 +-
 .../org/apache/hadoop/hdfs/protocol/ECInfo.java |  41 --
 .../apache/hadoop/hdfs/protocol/ECZoneInfo.java |  56 
 .../hadoop/hdfs/protocol/ErasureCodingInfo.java |  41 ++
 .../hdfs/protocol/ErasureCodingZoneInfo.java|  56 
 ...tNamenodeProtocolServerSideTranslatorPB.java |  18 +--
 .../ClientNamenodeProtocolTranslatorPB.java |  16 +--
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  24 ++--
 .../hdfs/server/namenode/ECSchemaManager.java   | 127 ---
 .../namenode/ErasureCodingSchemaManager.java| 127 +++
 .../namenode/ErasureCodingZoneManager.java  |  12 +-
 .../hdfs/server/namenode/FSDirectory.java   |   4 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  24 ++--
 .../hdfs/server/namenode/NameNodeRpcServer.java |   8 +-
 .../hdfs/tools/erasurecode/ECCommand.java   |   4 +-
 .../src/main/proto/ClientNamenodeProtocol.proto |   4 +-
 .../src/main/proto/erasurecoding.proto  |  16 +--
 .../hadoop/hdfs/TestDFSStripedInputStream.java  |   8 +-
 .../org/apache/hadoop/hdfs/TestECSchemas.java   |   2 +-
 .../hadoop/hdfs/TestErasureCodingZones.java |  10 +-
 .../hadoop/hdfs/protocolPB/TestPBHelper.java|  10 +-
 .../server/namenode/TestStripedINodeFile.java   |  16 +--
 26 files changed, 328 insertions(+), 325 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2bd83484/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 8729f8a..11e8376 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -183,3 +183,6 @@
 
 HDFS-8334. Erasure coding: rename DFSStripedInputStream related test 
 classes. (Zhe Zhang)
+
+HDFS-8129. Erasure Coding: Maintain consistent naming for Erasure Coding 
related classes - EC/ErasureCoding
+(umamahesh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2bd83484/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 63c27ef..71fdc34 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -118,8 +118,8 @@ import 
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
 import org.apache.hadoop.hdfs.protocol.DatanodeID;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.DirectoryListing;
-import org.apache.hadoop.hdfs.protocol.ECInfo;
-import org.apache.hadoop.hdfs.protocol.ECZoneInfo;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingInfo;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingZoneInfo;
 import org.apache.hadoop.hdfs.protocol.EncryptionZone;
 import org.apache.hadoop.hdfs.protocol.EncryptionZoneIterator;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
@@ -1191,7 +1191,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 //Get block info from namenode
 TraceScope scope = getPathTraceScope("newDFSInputStream", src);
 try {
-  ECInfo info = getErasureCodingInfo(src);
+  ErasureCodingInfo info = getErasureCodingInfo(src);
   if (info != null) {
 return new DFSStripedInputStream(this, src, verifyChecksum, info);
   } else {
@@ -3132,7 +3132,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 }
   }
 
-  public ECInfo 

[39/50] hadoop git commit: HDFS-8372. Erasure coding: compute storage type quotas for striped files, to be consistent with HDFS-8327. Contributed by Zhe Zhang.

2015-05-18 Thread zhz
HDFS-8372. Erasure coding: compute storage type quotas for striped files, to be 
consistent with HDFS-8327. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/536db0c7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/536db0c7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/536db0c7

Branch: refs/heads/HDFS-7285
Commit: 536db0c7232c1bfc4f91c900f3d05f036f703927
Parents: e83d1b8
Author: Jing Zhao ji...@apache.org
Authored: Tue May 12 11:43:04 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:01 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 ++
 .../namenode/FileWithStripedBlocksFeature.java  | 12 +++--
 .../hadoop/hdfs/server/namenode/INodeFile.java  | 53 +++-
 .../server/namenode/TestStripedINodeFile.java   | 22 
 4 files changed, 64 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/536db0c7/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 0acf746..f026a5c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -198,3 +198,6 @@
 
 HDFS-7678. Erasure coding: DFSInputStream with decode functionality 
(pread).
 (Zhe Zhang)
+
+HDFS-8372. Erasure coding: compute storage type quotas for striped files,
+to be consistent with HDFS-8327. (Zhe Zhang via jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/536db0c7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
index 47445be..94ab527 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
@@ -21,6 +21,7 @@ import com.google.common.base.Preconditions;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped;
+import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction;
 
 /**
  * Feature for file with striped blocks
@@ -78,20 +79,23 @@ class FileWithStripedBlocksFeature implements INode.Feature 
{
 }
   }
 
-  boolean removeLastBlock(Block oldblock) {
+  BlockInfoStripedUnderConstruction removeLastBlock(
+  Block oldblock) {
 if (blocks == null || blocks.length == 0) {
-  return false;
+  return null;
 }
 int newSize = blocks.length - 1;
 if (!blocks[newSize].equals(oldblock)) {
-  return false;
+  return null;
 }
 
+BlockInfoStripedUnderConstruction uc =
+(BlockInfoStripedUnderConstruction) blocks[newSize];
 //copy to a new list
 BlockInfoStriped[] newlist = new BlockInfoStriped[newSize];
 System.arraycopy(blocks, 0, newlist, 0, newSize);
 setBlocks(newlist);
-return true;
+return uc;
   }
 
   void truncateStripedBlocks(int n) {
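
Returning the under-construction block instead of a boolean lets the caller
do more than learn whether removal happened. A hedged sketch of the calling
pattern this enables; everything except removeLastBlock itself is
illustrative, and the placeholder helper is not a real API:

// 'feature' is the FileWithStripedBlocksFeature modified above.
BlockInfoStripedUnderConstruction uc = feature.removeLastBlock(oldBlock);
if (uc != null) {
  // The caller can now unwind state tied to the abandoned block, e.g.
  // expected replica locations. Hypothetical cleanup shown here:
  releaseExpectedLocations(uc);  // placeholder, for illustration only
}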

http://git-wip-us.apache.org/repos/asf/hadoop/blob/536db0c7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
index cc18770..154198c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
@@ -43,6 +43,7 @@ import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction;
+import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction;
 import 
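
The quota change is easiest to see with numbers. A worked example, with an
RS-6-3 layout (6 data + 3 parity blocks) assumed purely for illustration: a
striped file is charged for data plus parity, so its raw consumption is
(data + parity) / data times its logical length.

public class StripedQuotaMath {
  public static void main(String[] args) {
    int dataBlocks = 6, parityBlocks = 3;         // assumed RS-6-3 schema
    long logicalLength = 6L * 128 * 1024 * 1024;  // 768 MB of user data
    long raw = logicalLength * (dataBlocks + parityBlocks) / dataBlocks;
    System.out.println(raw);  // 1207959552 bytes, i.e. 1152 MB of quota
  }
}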

[16/50] hadoop git commit: HDFS-8282. Erasure coding: move striped reading logic to StripedBlockUtil. Contributed by Zhe Zhang.

2015-05-18 Thread zhz
HDFS-8282. Erasure coding: move striped reading logic to StripedBlockUtil. 
Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6dcb9b15
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6dcb9b15
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6dcb9b15

Branch: refs/heads/HDFS-7285
Commit: 6dcb9b15b43eded1291247f883f195971ae4f4fc
Parents: 8620d40
Author: Zhe Zhang z...@apache.org
Authored: Wed Apr 29 23:49:52 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:49 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../hadoop/hdfs/DFSStripedInputStream.java  | 111 +---
 .../hadoop/hdfs/util/StripedBlockUtil.java  | 174 +++
 .../hadoop/hdfs/TestPlanReadPortions.java   |  11 +-
 4 files changed, 186 insertions(+), 113 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6dcb9b15/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 6a9bdee..ca60487 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -146,3 +146,6 @@
 
 HDFS-8272. Erasure Coding: simplify the retry logic in 
DFSStripedInputStream 
 (stateful read). (Jing Zhao via Zhe Zhang)
+
+HDFS-8282. Erasure coding: move striped reading logic to StripedBlockUtil.
+(Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6dcb9b15/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index 3da7306..0dc98fd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -17,12 +17,14 @@
  */
 package org.apache.hadoop.hdfs;
 
-import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.fs.ChecksumException;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.*;
 import 
org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException;
 import org.apache.hadoop.hdfs.util.StripedBlockUtil;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.ReadPortion;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.planReadPortions;
+
 import org.apache.hadoop.net.NetUtils;
 import org.apache.htrace.Span;
 import org.apache.htrace.Trace;
@@ -31,8 +33,6 @@ import org.apache.htrace.TraceScope;
 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.List;
 import java.util.Set;
 import java.util.Map;
 import java.util.HashMap;
@@ -69,59 +69,6 @@ import java.util.concurrent.Future;
  *   3. pread with decode support: TODO: will be supported after HDFS-7678
  */
 public class DFSStripedInputStream extends DFSInputStream {
-  /**
-   * This method plans the read portion from each block in the stripe
-   * @param dataBlkNum The number of data blocks in the striping group
-   * @param cellSize The size of each striping cell
-   * @param startInBlk Starting offset in the striped block
-   * @param len Length of the read request
-   * @param bufOffset  Initial offset in the result buffer
-   * @return array of {@link ReadPortion}, each representing the portion of I/O
-   * for an individual block in the group
-   */
-  @VisibleForTesting
-  static ReadPortion[] planReadPortions(final int dataBlkNum,
-  final int cellSize, final long startInBlk, final int len, int bufOffset) 
{
-ReadPortion[] results = new ReadPortion[dataBlkNum];
-for (int i = 0; i < dataBlkNum; i++) {
-  results[i] = new ReadPortion();
-}
-
-// cellIdxInBlk is the index of the cell in the block
-// E.g., cell_3 is the 2nd cell in blk_0
-int cellIdxInBlk = (int) (startInBlk / (cellSize * dataBlkNum));
-
-// blkIdxInGroup is the index of the block in the striped block group
-// E.g., blk_2 is the 3rd block in the group
-final int blkIdxInGroup = (int) (startInBlk / cellSize % dataBlkNum);
-results[blkIdxInGroup].startOffsetInBlock = cellSize * cellIdxInBlk +
-startInBlk % cellSize;
-boolean 
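
After the move, the planner is reachable as a static utility, which is what
the new static imports in DFSStripedInputStream use. A hedged usage sketch:
the signature comes from the removed method above, but ReadPortion's
accessors are not visible in this excerpt, so the loop only prints each plan.

import org.apache.hadoop.hdfs.util.StripedBlockUtil;
import org.apache.hadoop.hdfs.util.StripedBlockUtil.ReadPortion;

public class PlanPortionsDemo {
  public static void main(String[] args) {
    // Plan a 4096-byte read starting at offset 100 of a striped group with
    // 3 data blocks and 1 KB cells: one ReadPortion per internal block.
    ReadPortion[] portions =
        StripedBlockUtil.planReadPortions(3, 1024, 100L, 4096, 0);
    for (ReadPortion p : portions) {
      System.out.println(p);
    }
  }
}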

[47/50] hadoop git commit: HADOOP-11921. Enhance tests for erasure coders. Contributed by Kai Zheng.

2015-05-18 Thread zhz
HADOOP-11921. Enhance tests for erasure coders. Contributed by Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/768b992c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/768b992c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/768b992c

Branch: refs/heads/HDFS-7285
Commit: 768b992cc9868236a85c09c418dd96f15648be5e
Parents: 84562b4
Author: Zhe Zhang z...@apache.org
Authored: Mon May 18 10:06:56 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:06:56 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  2 +
 .../hadoop/io/erasurecode/TestCoderBase.java| 50 ++-
 .../erasurecode/coder/TestErasureCoderBase.java | 89 +++-
 .../erasurecode/coder/TestRSErasureCoder.java   | 64 ++
 .../io/erasurecode/coder/TestXORCoder.java  | 24 --
 .../io/erasurecode/rawcoder/TestRSRawCoder.java | 76 +
 .../rawcoder/TestRSRawCoderBase.java| 51 +++
 .../erasurecode/rawcoder/TestRawCoderBase.java  | 45 +-
 .../erasurecode/rawcoder/TestXORRawCoder.java   | 24 --
 9 files changed, 274 insertions(+), 151 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/768b992c/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 9749270..c10ffbd 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -44,3 +44,5 @@
 HADOOP-11818. Minor improvements for erasurecode classes. (Rakesh R via 
Kai Zheng)
 
 HADOOP-11841. Remove unused ecschema-def.xml files.  (szetszwo)
+
+HADOOP-11921. Enhance tests for erasure coders. (Kai Zheng via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/768b992c/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
index 22fd98d..be1924c 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
@@ -49,15 +49,15 @@ public abstract class TestCoderBase {
* Prepare before running the case.
* @param numDataUnits
* @param numParityUnits
-   * @param erasedIndexes
+   * @param erasedDataIndexes
*/
   protected void prepare(Configuration conf, int numDataUnits,
- int numParityUnits, int[] erasedIndexes) {
+ int numParityUnits, int[] erasedDataIndexes) {
 this.conf = conf;
 this.numDataUnits = numDataUnits;
 this.numParityUnits = numParityUnits;
-this.erasedDataIndexes = erasedIndexes != null ?
-erasedIndexes : new int[] {0};
+this.erasedDataIndexes = erasedDataIndexes != null ?
+erasedDataIndexes : new int[] {0};
   }
 
   /**
@@ -82,15 +82,19 @@ public abstract class TestCoderBase {
   }
 
   /**
-   * Adjust and return erased indexes based on the array of the input chunks (
-   * parity chunks + data chunks).
-   * @return
+   * Adjust and return erased indexes altogether, including erased data indexes
+   * and parity indexes.
+   * @return erased indexes altogether
*/
   protected int[] getErasedIndexesForDecoding() {
 int[] erasedIndexesForDecoding = new int[erasedDataIndexes.length];
+
+int idx = 0;
+
 for (int i = 0; i < erasedDataIndexes.length; i++) {
-  erasedIndexesForDecoding[i] = erasedDataIndexes[i] + numParityUnits;
+  erasedIndexesForDecoding[idx ++] = erasedDataIndexes[i] + numParityUnits;
 }
+
 return erasedIndexesForDecoding;
   }
 
@@ -116,30 +120,23 @@ public abstract class TestCoderBase {
   }
 
   /**
-   * Have a copy of the data chunks that's to be erased thereafter. The copy
-   * will be used to compare and verify with the to be recovered chunks.
+   * Erase chunks to test the recovering of them. Before erasure clone them
+   * first so could return them.
* @param dataChunks
-   * @return
+   * @return clone of erased chunks
*/
-  protected ECChunk[] copyDataChunksToErase(ECChunk[] dataChunks) {
-ECChunk[] copiedChunks = new ECChunk[erasedDataIndexes.length];
-
-int j = 0;
-for (int i = 0; i < erasedDataIndexes.length; i++) {
-  copiedChunks[j 
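
The renamed parameter makes the contract explicit: prepare() takes erased
data indexes, and getErasedIndexesForDecoding() shifts them into the
decoder's parity-first input layout. A self-contained demonstration of that
shift, with RS-6-3 values assumed for the example:

public class ErasedIndexDemo {
  public static void main(String[] args) {
    int numParityUnits = 3;             // assumed schema: 6 data + 3 parity
    int[] erasedDataIndexes = {0, 2};   // data units lost
    int[] forDecoding = new int[erasedDataIndexes.length];
    for (int i = 0; i < erasedDataIndexes.length; i++) {
      // decoder inputs are ordered parity-first, so data index i shifts up
      forDecoding[i] = erasedDataIndexes[i] + numParityUnits;
    }
    System.out.println(forDecoding[0] + "," + forDecoding[1]);  // 3,5
  }
}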

[22/50] hadoop git commit: HDFS-8308. Erasure Coding: NameNode may get blocked in waitForLoadingFSImage() when loading editlog. Contributed by Jing Zhao.

2015-05-18 Thread zhz
HDFS-8308. Erasure Coding: NameNode may get blocked in waitForLoadingFSImage() 
when loading editlog. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/889529b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/889529b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/889529b9

Branch: refs/heads/HDFS-7285
Commit: 889529b93d425f97ff6e68158224c499a8bfd904
Parents: bf2d0ac
Author: Jing Zhao ji...@apache.org
Authored: Thu Apr 30 19:42:29 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:50 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +
 .../namenode/ErasureCodingZoneManager.java  |  3 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  4 +-
 .../org/apache/hadoop/hdfs/DFSTestUtil.java | 12 
 .../hadoop/hdfs/TestErasureCodingZones.java |  6 +-
 .../server/namenode/TestAddStripedBlocks.java   | 61 ++--
 6 files changed, 52 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/889529b9/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 3c75152..596bbcf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -152,3 +152,6 @@
 
 HDFS-8183. Erasure Coding: Improve DFSStripedOutputStream closing of 
 datastreamer threads. (Rakesh R via Zhe Zhang)
+
+HDFS-8308. Erasure Coding: NameNode may get blocked in 
waitForLoadingFSImage()
+when loading editlog. (jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/889529b9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
index 8cda289..14d4e29 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
@@ -79,7 +79,8 @@ public class ErasureCodingZoneManager {
   for (XAttr xAttr : xAttrs) {
 if (XATTR_ERASURECODING_ZONE.equals(XAttrHelper.getPrefixName(xAttr))) 
{
   String schemaName = new String(xAttr.getValue());
-  ECSchema schema = dir.getFSNamesystem().getECSchema(schemaName);
+  ECSchema schema = dir.getFSNamesystem().getSchemaManager()
+  .getSchema(schemaName);
   return new ECZoneInfo(inode.getFullPathName(), schema);
 }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/889529b9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 7aad42f..2bf89db 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -7712,9 +7712,9 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 
   /**
* Create an erasure coding zone on directory src.
-   * @param schema  ECSchema for the erasure coding zone
-   * @param src the path of a directory which will be the root of the
+   * @param srcArg  the path of a directory which will be the root of the
*erasure coding zone. The directory must be empty.
+   * @param schema  ECSchema for the erasure coding zone
*
* @throws AccessControlException  if the caller is not the superuser.
* @throws UnresolvedLinkException if the path can't be resolved.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/889529b9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
index 
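
The one-line change in ErasureCodingZoneManager is the heart of the fix. Per
the commit message, the FSNamesystem path waits in waitForLoadingFSImage();
calling it while the edit log is still being replayed, that is, during the
very load being waited on, blocks the NameNode. Reading the schema straight
from the schema manager sidesteps the wait. The contrast, as a sketch of the
two call paths (not compilable in isolation):

// Before: may park in waitForLoadingFSImage() during edit-log replay.
ECSchema schema = dir.getFSNamesystem().getECSchema(schemaName);
// After: plain lookup, safe while the image/edit log is still loading.
ECSchema schema2 = dir.getFSNamesystem().getSchemaManager()
    .getSchema(schemaName);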

[36/50] hadoop git commit: HDFS-8355. Erasure Coding: Refactor BlockInfo and BlockInfoUnderConstruction. Contributed by Tsz Wo Nicholas Sze.

2015-05-18 Thread zhz
HDFS-8355. Erasure Coding: Refactor BlockInfo and BlockInfoUnderConstruction. 
Contributed by Tsz Wo Nicholas Sze.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/053da55f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/053da55f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/053da55f

Branch: refs/heads/HDFS-7285
Commit: 053da55fcec30289606248f33beacb80ca84fe24
Parents: 2cdb879
Author: Jing Zhao ji...@apache.org
Authored: Fri May 8 13:56:56 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:00 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +
 .../hdfs/server/blockmanagement/BlockInfo.java  | 95 +---
 .../BlockInfoContiguousUnderConstruction.java   | 27 ++
 .../BlockInfoStripedUnderConstruction.java  | 25 ++
 .../BlockInfoUnderConstruction.java | 27 ++
 .../server/blockmanagement/BlockManager.java| 51 ---
 .../hdfs/server/namenode/FSNamesystem.java  | 20 ++---
 7 files changed, 95 insertions(+), 153 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/053da55f/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index ab8a748..c7d01c7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -192,3 +192,6 @@
 
 HDFS-8289. Erasure Coding: add ECSchema to HdfsFileStatus. (Yong Zhang via
 jing9)
+
+HDFS-8355. Erasure Coding: Refactor BlockInfo and 
BlockInfoUnderConstruction.
+(Tsz Wo Nicholas Sze via jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/053da55f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index 8b71925..aebfbb1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -17,13 +17,12 @@
  */
 package org.apache.hadoop.hdfs.server.blockmanagement;
 
+import java.util.LinkedList;
+
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
 import org.apache.hadoop.util.LightWeightGSet;
 
-import java.io.IOException;
-import java.util.LinkedList;
-
 /**
  * For a given block (or an erasure coding block group), BlockInfo class
  * maintains 1) the {@link BlockCollection} it is part of, and 2) datanodes
@@ -336,94 +335,4 @@ public abstract class BlockInfo extends Block
   public void setNext(LightWeightGSet.LinkedElement next) {
 this.nextLinkedElement = next;
   }
-
-  static BlockInfo copyOf(BlockInfo b) {
-if (!b.isStriped()) {
-  return new BlockInfoContiguous((BlockInfoContiguous) b);
-} else {
-  return new BlockInfoStriped((BlockInfoStriped) b);
-}
-  }
-
-  static BlockInfo convertToCompleteBlock(BlockInfo blk) throws IOException {
-if (blk instanceof BlockInfoContiguousUnderConstruction) {
-  return ((BlockInfoContiguousUnderConstruction) blk)
-  .convertToCompleteBlock();
-} else if (blk instanceof BlockInfoStripedUnderConstruction) {
-  return ((BlockInfoStripedUnderConstruction) 
blk).convertToCompleteBlock();
-} else {
-  return blk;
-}
-  }
-
-  static void commitBlock(BlockInfo blockInfo, Block reported)
-  throws IOException {
-if (blockInfo instanceof BlockInfoContiguousUnderConstruction) {
-  ((BlockInfoContiguousUnderConstruction) blockInfo).commitBlock(reported);
-} else if (blockInfo instanceof BlockInfoStripedUnderConstruction) {
-  ((BlockInfoStripedUnderConstruction) blockInfo).commitBlock(reported);
-}
-  }
-
-  static void addReplica(BlockInfo ucBlock, DatanodeStorageInfo storageInfo,
-  Block reportedBlock, HdfsServerConstants.ReplicaState reportedState) {
-assert ucBlock instanceof BlockInfoContiguousUnderConstruction ||
-ucBlock instanceof BlockInfoStripedUnderConstruction;
-if (ucBlock instanceof BlockInfoContiguousUnderConstruction) {
-  ((BlockInfoContiguousUnderConstruction) ucBlock).addReplicaIfNotPresent(
-  storageInfo, reportedBlock, reportedState);
-} else { // StripedUC
-  
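
All four removed helpers share the same shape: an instanceof ladder over the
contiguous and striped under-construction classes. Judging by the file list
(BlockInfoUnderConstruction.java is among those touched), the refactor hangs
these operations off a shared BlockInfoUnderConstruction type so call sites
keep a single cast. A hedged sketch of what such a call site looks like; the
post-refactor signatures are not visible in this excerpt:

// Hypothetical caller after the refactor: one type test instead of two.
if (blk instanceof BlockInfoUnderConstruction) {
  ((BlockInfoUnderConstruction) blk).commitBlock(reported);
}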

[46/50] hadoop git commit: HDFS-8352. Erasure Coding: test webhdfs read write stripe file. (waltersu4549)

2015-05-18 Thread zhz
HDFS-8352. Erasure Coding: test webhdfs read write stripe file. (waltersu4549)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/84562b44
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/84562b44
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/84562b44

Branch: refs/heads/HDFS-7285
Commit: 84562b444dd2eebe1fd779a1b537b65b95c9e541
Parents: 1437ab8
Author: waltersu4549 waltersu4...@apache.org
Authored: Mon May 18 19:10:37 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:03 2015 -0700

--
 .../hadoop/hdfs/TestWriteReadStripedFile.java   | 267 ++-
 1 file changed, 148 insertions(+), 119 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/84562b44/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
index 57d6eb9..f78fb7a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
@@ -21,9 +21,13 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.apache.hadoop.hdfs.web.ByteRangeInputStream;
+import org.apache.hadoop.hdfs.web.WebHdfsConstants;
+import org.apache.hadoop.hdfs.web.WebHdfsTestUtil;
 import org.apache.hadoop.io.erasurecode.rawcoder.RSRawDecoder;
 import org.junit.AfterClass;
 import org.junit.Assert;
@@ -33,23 +37,26 @@ import org.junit.Test;
 import java.io.EOFException;
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.util.Random;
 
 public class TestWriteReadStripedFile {
   private static int dataBlocks = HdfsConstants.NUM_DATA_BLOCKS;
   private static int parityBlocks = HdfsConstants.NUM_PARITY_BLOCKS;
 
-
-  private static DistributedFileSystem fs;
   private final static int cellSize = HdfsConstants.BLOCK_STRIPED_CELL_SIZE;
   private final static int stripesPerBlock = 4;
   static int blockSize = cellSize * stripesPerBlock;
   static int numDNs = dataBlocks + parityBlocks + 2;
 
   private static MiniDFSCluster cluster;
+  private static Configuration conf;
+  private static FileSystem fs;
+
+  private static Random r= new Random();
 
   @BeforeClass
   public static void setup() throws IOException {
-Configuration conf = new Configuration();
+conf = new Configuration();
 conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
 cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDNs).build();
 cluster.getFileSystem().getClient().createErasureCodingZone("/", null);
@@ -134,7 +141,7 @@ public class TestWriteReadStripedFile {
   @Test
   public void testFileMoreThanABlockGroup2() throws IOException {
 testOneFileUsingDFSStripedInputStream(/MoreThanABlockGroup2,
-blockSize * dataBlocks + cellSize+ 123);
+blockSize * dataBlocks + cellSize + 123);
   }
 
 
@@ -171,7 +178,7 @@ public class TestWriteReadStripedFile {
   }
 
   private void assertSeekAndRead(FSDataInputStream fsdis, int pos,
-  int writeBytes) throws IOException {
+ int writeBytes) throws IOException {
 fsdis.seek(pos);
 byte[] buf = new byte[writeBytes];
 int readLen = readAll(fsdis, buf);
@@ -182,147 +189,169 @@ public class TestWriteReadStripedFile {
 }
   }
 
-  private void testOneFileUsingDFSStripedInputStream(String src, int 
writeBytes)
+  private void testOneFileUsingDFSStripedInputStream(String src, int 
fileLength)
   throws IOException {
-Path testPath = new Path(src);
-final byte[] bytes = generateBytes(writeBytes);
-DFSTestUtil.writeFile(fs, testPath, new String(bytes));
 
-//check file length
-FileStatus status = fs.getFileStatus(testPath);
-long fileLength = status.getLen();
+final byte[] expected = generateBytes(fileLength);
+Path srcPath = new Path(src);
+DFSTestUtil.writeFile(fs, srcPath, new String(expected));
+
+verifyLength(fs, srcPath, fileLength);
+
+byte[] smallBuf = new byte[1024];
+byte[] largeBuf = new byte[fileLength + 100];
+verifyPread(fs, srcPath, fileLength, expected, largeBuf);
+
+verifyStatefulRead(fs, srcPath, fileLength, expected, 
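
The rewritten test drives home the point of the change: over WebHDFS a
striped file reads like any other file, because striping is reassembled on
the server side. A hedged user-level counterpart; the URI, port, and path
are placeholders rather than values from this commit:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsStripedRead {
  public static void main(String[] args) throws Exception {
    FileSystem webFs = FileSystem.get(
        URI.create("webhdfs://namenode:50070/"), new Configuration());
    try (FSDataInputStream in = webFs.open(new Path("/eczone/striped-file"))) {
      byte[] buf = new byte[4096];
      int n;
      while ((n = in.read(buf)) > 0) {
        // consume n bytes; the client never sees cells or parity
      }
    }
  }
}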

[35/50] hadoop git commit: HDFS-8289. Erasure Coding: add ECSchema to HdfsFileStatus. Contributed by Yong Zhang.

2015-05-18 Thread zhz
HDFS-8289. Erasure Coding: add ECSchema to HdfsFileStatus. Contributed by Yong 
Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2cdb879c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2cdb879c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2cdb879c

Branch: refs/heads/HDFS-7285
Commit: 2cdb879cd31558984101ecc5296ac93bc3a6260b
Parents: 282349e
Author: Jing Zhao ji...@apache.org
Authored: Thu May 7 11:52:49 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:00 2015 -0700

--
 .../hadoop/hdfs/protocol/HdfsFileStatus.java| 10 ++-
 .../protocol/SnapshottableDirectoryStatus.java  |  2 +-
 .../apache/hadoop/hdfs/web/JsonUtilClient.java  |  2 +-
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  6 +-
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |  2 +-
 .../hadoop/hdfs/DFSStripedInputStream.java  | 13 ++--
 .../hadoop/hdfs/DFSStripedOutputStream.java |  4 +-
 .../hdfs/protocol/HdfsLocatedFileStatus.java|  5 +-
 .../ClientNamenodeProtocolTranslatorPB.java |  2 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java | 10 ++-
 .../server/namenode/FSDirStatAndListingOp.java  | 16 +++--
 .../src/main/proto/erasurecoding.proto  | 19 --
 .../hadoop-hdfs/src/main/proto/hdfs.proto   | 22 +++
 .../hadoop/hdfs/TestDFSClientRetries.java   |  4 +-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 16 +++--
 .../apache/hadoop/hdfs/TestEncryptionZones.java |  2 +-
 .../hadoop/hdfs/TestFileStatusWithECschema.java | 65 
 .../java/org/apache/hadoop/hdfs/TestLease.java  |  4 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   |  2 +-
 .../apache/hadoop/hdfs/web/TestJsonUtil.java|  2 +-
 21 files changed, 149 insertions(+), 62 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cdb879c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
index 34f429a..f07973a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
@@ -26,6 +26,7 @@ import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.DFSUtilClient;
+import org.apache.hadoop.io.erasurecode.ECSchema;
 
 /** Interface that represents the over the wire information for a file.
  */
@@ -48,6 +49,8 @@ public class HdfsFileStatus {
 
   private final FileEncryptionInfo feInfo;
 
+  private final ECSchema schema;
+  
   // Used by dir, not including dot and dotdot. Always zero for a regular file.
   private final int childrenNum;
   private final byte storagePolicy;
@@ -73,7 +76,7 @@ public class HdfsFileStatus {
   long blocksize, long modification_time, long access_time,
   FsPermission permission, String owner, String group, byte[] symlink,
   byte[] path, long fileId, int childrenNum, FileEncryptionInfo feInfo,
-  byte storagePolicy) {
+  byte storagePolicy, ECSchema schema) {
 this.length = length;
 this.isdir = isdir;
 this.block_replication = (short)block_replication;
@@ -93,6 +96,7 @@ public class HdfsFileStatus {
 this.childrenNum = childrenNum;
 this.feInfo = feInfo;
 this.storagePolicy = storagePolicy;
+this.schema = schema;
   }
 
   /**
@@ -250,6 +254,10 @@ public class HdfsFileStatus {
 return feInfo;
   }
 
+  public ECSchema getECSchema() {
+return schema;
+  }
+
   public final int getChildrenNum() {
 return childrenNum;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cdb879c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
index ac19d44..813ea26 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
+++ 
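
With the new field, callers can distinguish replicated from striped files
given only a status object. A minimal helper built solely on the accessor
added above; note that HdfsFileStatus itself is normally obtained through
DFSClient#getFileInfo, which is client-internal API:

import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
import org.apache.hadoop.io.erasurecode.ECSchema;

final class EcStatusUtil {
  // Describe a file's layout using only getECSchema() from this commit.
  static String describe(HdfsFileStatus status) {
    ECSchema schema = status.getECSchema();
    return schema == null ? "replicated (no EC schema)" : schema.toString();
  }
}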

[26/50] hadoop git commit: HDFS-8137. Send the EC schema to DataNode via EC encoding/recovering command. Contributed by Uma Maheswara Rao G

2015-05-18 Thread zhz
HDFS-8137. Send the EC schema to DataNode via EC encoding/recovering command. 
Contributed by Uma Maheswara Rao G


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e85cd187
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e85cd187
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e85cd187

Branch: refs/heads/HDFS-7285
Commit: e85cd187a5ba376ca85e2cf933465ea27f40fdcf
Parents: da02437
Author: Uma Maheswara Rao G umamah...@apache.org
Authored: Tue May 5 11:22:52 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:51 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  2 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  6 ++-
 .../server/blockmanagement/BlockManager.java| 22 +-
 .../blockmanagement/DatanodeDescriptor.java | 16 
 .../hdfs/server/namenode/FSNamesystem.java  | 43 +++-
 .../hadoop/hdfs/server/namenode/Namesystem.java | 14 ++-
 .../server/protocol/BlockECRecoveryCommand.java | 14 ++-
 .../src/main/proto/erasurecoding.proto  |  1 +
 .../hadoop/hdfs/protocolPB/TestPBHelper.java| 21 --
 9 files changed, 102 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e85cd187/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 77272e7..faec023 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -164,3 +164,5 @@
 
 HDFS-8281. Erasure Coding: implement parallel stateful reading for striped 
layout.
 (jing9)
+
+HDFS-8137. Send the EC schema to DataNode via EC encoding/recovering 
command(umamahesh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e85cd187/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
index 3cd3e03..e230232 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
@@ -3191,8 +3191,10 @@ public class PBHelper {
   liveBlkIndices[i] = liveBlockIndicesList.get(i).shortValue();
 }
 
+ECSchema ecSchema = 
convertECSchema(blockEcRecoveryInfoProto.getEcSchema());
+
 return new BlockECRecoveryInfo(block, sourceDnInfos, targetDnInfos,
-targetStorageUuids, convertStorageTypes, liveBlkIndices);
+targetStorageUuids, convertStorageTypes, liveBlkIndices, ecSchema);
   }
 
   public static BlockECRecoveryInfoProto convertBlockECRecoveryInfo(
@@ -3217,6 +3219,8 @@ public class PBHelper {
 short[] liveBlockIndices = blockEcRecoveryInfo.getLiveBlockIndices();
 builder.addAllLiveBlockIndices(convertIntArray(liveBlockIndices));
 
+builder.setEcSchema(convertECSchema(blockEcRecoveryInfo.getECSchema()));
+
 return builder.build();
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e85cd187/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 1e50348..b55c654 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -65,7 +65,6 @@ import 
org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
 import org.apache.hadoop.hdfs.server.blockmanagement.CorruptReplicasMap.Reason;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.AddBlockResult;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.PendingDataNodeMessages.ReportedBlockInfo;
-import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
@@ -83,7 +82,10 @@ import 
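
The two PBHelper hunks are deliberately symmetric: whatever the NameNode
serializes into the recovery command must be read back on the receiving
side, or the DataNode would reconstruct with a missing or default schema.
Side by side (both lines are taken from the hunks above):

// Serialization, NameNode -> wire:
builder.setEcSchema(convertECSchema(blockEcRecoveryInfo.getECSchema()));
// Deserialization, wire -> DataNode:
ECSchema ecSchema = convertECSchema(blockEcRecoveryInfoProto.getEcSchema());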

[04/50] hadoop git commit: HDFS-8181. createErasureCodingZone sets retryCache state as false always (Contributed by Uma Maheswara Rao G)

2015-05-18 Thread zhz
HDFS-8181. createErasureCodingZone sets retryCache state as false always 
(Contributed by Uma Maheswara Rao G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4c37b057
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4c37b057
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4c37b057

Branch: refs/heads/HDFS-7285
Commit: 4c37b05720143fb9c44fb5af22f42b77ed6a962b
Parents: d0bc27d
Author: Vinayakumar B vinayakum...@apache.org
Authored: Mon Apr 20 15:04:49 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:46 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt  | 14 ++
 .../hdfs/server/namenode/NameNodeRpcServer.java   |  1 +
 2 files changed, 15 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4c37b057/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 40517e7..c8dbf08 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -83,10 +83,24 @@
 
 HDFS-7349. Support DFS command for the EC encoding (vinayakumarb)
 
+HDFS-8120. Erasure coding: created util class to analyze striped block 
groups.
+(Contributed by Zhe Zhang and Li Bo via Jing Zhao)
+
HDFS-7994. Detect if reserved EC Block ID is already used during namenode
 startup. (Hui Zheng via szetszwo)
 
 HDFS-8167. BlockManager.addBlockCollectionWithCheck should check if the 
block is a striped block. (Hui Zheng via zhz).
 
+HDFS-8166. DFSStripedOutputStream should not create empty blocks. (Jing 
Zhao)
+
+HDFS-7937. Erasure Coding: INodeFile quota computation unit tests.
+(Kai Sasaki via Jing Zhao)
+
+HDFS-8145. Fix the editlog corruption exposed by failed 
TestAddStripedBlocks.
+(Jing Zhao)
+
 HDFS-8146. Protobuf changes for BlockECRecoveryCommand and its fields for
 making it ready for transfer to DN (Uma Maheswara Rao G via vinayakumarb)
+
+HDFS-8181. createErasureCodingZone sets retryCache state as false always
+(Uma Maheswara Rao G via vinayakumarb)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4c37b057/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
index 8217907..dcf0607 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
@@ -1834,6 +1834,7 @@ class NameNodeRpcServer implements NamenodeProtocols {
 boolean success = false;
 try {
   namesystem.createErasureCodingZone(src, schema, cacheEntry != null);
+  success = true;
 } finally {
   RetryCache.setState(cacheEntry, success);
 }
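
The bug was that success stayed false on every path, so even a completed
createErasureCodingZone was recorded as failed in the retry cache, and a
client retry could re-execute it. The corrected idiom, worth stating on its
own because it recurs throughout NameNodeRpcServer (types as in the hunk):

boolean success = false;
try {
  namesystem.createErasureCodingZone(src, schema, cacheEntry != null);
  success = true;   // reached only if the operation completed
} finally {
  // Record the true outcome: a retry of a failed call may re-execute,
  // while a retry of a successful call is answered from the cache.
  RetryCache.setState(cacheEntry, success);
}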



[49/50] hadoop git commit: HADOOP-11566. Add tests and fix for erasure coders to recover erased parity units. Contributed by Kai Zheng.

2015-05-18 Thread zhz
HADOOP-11566. Add tests and fix for erasure coders to recover erased parity 
units. Contributed by Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a37e2144
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a37e2144
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a37e2144

Branch: refs/heads/HDFS-7285
Commit: a37e214438f2d594f082d3d4dbc873bb8708c29b
Parents: 6e2ccde
Author: Zhe Zhang z...@apache.org
Authored: Mon May 18 10:13:03 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:13:03 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  3 ++
 .../apache/hadoop/io/erasurecode/ECChunk.java   | 17 ++-
 .../coder/AbstractErasureDecoder.java   | 13 --
 .../hadoop/io/erasurecode/TestCoderBase.java| 37 +++
 .../erasurecode/coder/TestErasureCoderBase.java | 37 +++
 .../erasurecode/coder/TestRSErasureCoder.java   | 48 +++-
 .../io/erasurecode/coder/TestXORCoder.java  |  6 +--
 .../io/erasurecode/rawcoder/TestRSRawCoder.java | 37 +--
 .../erasurecode/rawcoder/TestRawCoderBase.java  |  2 +-
 .../erasurecode/rawcoder/TestXORRawCoder.java   | 11 -
 10 files changed, 134 insertions(+), 77 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a37e2144/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index a152e31..34dfc9e 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -48,3 +48,6 @@
 HADOOP-11921. Enhance tests for erasure coders. (Kai Zheng via Zhe Zhang)
 
 HADOOP-11920. Refactor some codes for erasure coders. (Kai Zheng via Zhe 
Zhang)
+
+HADOOP-11566. Add tests and fix for erasure coders to recover erased 
parity 
+units. (Kai Zheng via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a37e2144/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
index 436e13e..69a8343 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
@@ -58,8 +58,14 @@ public class ECChunk {
   public static ByteBuffer[] toBuffers(ECChunk[] chunks) {
 ByteBuffer[] buffers = new ByteBuffer[chunks.length];
 
+ECChunk chunk;
for (int i = 0; i < chunks.length; i++) {
-  buffers[i] = chunks[i].getBuffer();
+  chunk = chunks[i];
+  if (chunk == null) {
+buffers[i] = null;
+  } else {
+buffers[i] = chunk.getBuffer();
+  }
 }
 
 return buffers;
@@ -75,8 +81,15 @@ public class ECChunk {
 byte[][] bytesArr = new byte[chunks.length][];
 
 ByteBuffer buffer;
+ECChunk chunk;
for (int i = 0; i < chunks.length; i++) {
-  buffer = chunks[i].getBuffer();
+  chunk = chunks[i];
+  if (chunk == null) {
+bytesArr[i] = null;
+continue;
+  }
+
+  buffer = chunk.getBuffer();
   if (buffer.hasArray()) {
 bytesArr[i] = buffer.array();
   } else {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a37e2144/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
index cd31294..6437236 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
@@ -60,16 +60,21 @@ public abstract class AbstractErasureDecoder extends 
AbstractErasureCoder {
   }
 
   /**
-   * Which blocks were erased ? We only care data blocks here. Sub-classes can
-   * override this behavior.
+   * Which blocks were erased ?
* @param blockGroup
* @return output blocks to recover
*/
   protected ECBlock[] 
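
The null handling added to ECChunk is what allows a decoder to be handed the
full chunk array with erased positions simply left null. A self-contained
demonstration; the chunk count and sizes are arbitrary:

import java.nio.ByteBuffer;
import org.apache.hadoop.io.erasurecode.ECChunk;

public class ErasedChunkDemo {
  public static void main(String[] args) {
    ECChunk[] inputs = new ECChunk[3];
    inputs[0] = new ECChunk(ByteBuffer.allocate(64));
    inputs[1] = null;                     // an erased unit stays null
    inputs[2] = new ECChunk(ByteBuffer.allocate(64));
    // Before this patch the helper dereferenced every element; now nulls
    // pass straight through instead of raising NullPointerException.
    ByteBuffer[] buffers = ECChunk.toBuffers(inputs);
    System.out.println(buffers[1] == null);  // true
  }
}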

[44/50] hadoop git commit: HDFS-8364. Erasure coding: fix some minor bugs in EC CLI (Contributed by Walter Su)

2015-05-18 Thread zhz
HDFS-8364. Erasure coding: fix some minor bugs in EC CLI (Contributed by Walter 
Su)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8bc4adb4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8bc4adb4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8bc4adb4

Branch: refs/heads/HDFS-7285
Commit: 8bc4adb4a3a3e9c06653ef72db722919af1f6eff
Parents: 139c0a9
Author: Vinayakumar B vinayakum...@apache.org
Authored: Wed May 13 12:43:39 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:02:02 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 ++
 .../hadoop-hdfs/src/main/bin/hdfs   |  1 +
 .../hdfs/tools/erasurecode/ECCommand.java   | 12 ---
 .../test/resources/testErasureCodingConf.xml| 35 
 4 files changed, 47 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8bc4adb4/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 0945d72..190ddd6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -209,3 +209,6 @@
 
 HDFS-8195. Erasure coding: Fix file quota change when we complete/commit 
 the striped blocks. (Takuya Fukudome via zhz)
+
+HDFS-8364. Erasure coding: fix some minor bugs in EC CLI
+(Walter Su via vinayakumarb)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8bc4adb4/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
index 84c79b8..5ee7f4d 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
@@ -28,6 +28,7 @@ function hadoop_usage
   echo "  datanode run a DFS datanode"
   echo "  dfs  run a filesystem command on the file system"
   echo "  dfsadmin run a DFS admin client"
+  echo "  erasurecode  configure HDFS erasure coding zones"
   echo "  fetchdt  fetch a delegation token from the NameNode"
   echo "  fsck run a DFS filesystem checking utility"
   echo "  getconf  get config values from configuration"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8bc4adb4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCommand.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCommand.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCommand.java
index d53844d..2b6a6a5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCommand.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCommand.java
@@ -135,7 +135,7 @@ public abstract class ECCommand extends Command {
 out.println("EC Zone created successfully at " + item.path);
   } catch (IOException e) {
throw new IOException("Unable to create EC zone for the path "
-+ item.path, e);
++ item.path + ". " + e.getMessage());
   }
 }
   }
@@ -165,10 +165,14 @@ public abstract class ECCommand extends Command {
   DistributedFileSystem dfs = (DistributedFileSystem) item.fs;
   try {
 ErasureCodingZoneInfo ecZoneInfo = 
dfs.getErasureCodingZoneInfo(item.path);
-out.println(ecZoneInfo.toString());
+if (ecZoneInfo != null) {
+  out.println(ecZoneInfo.toString());
+} else {
+  out.println("Path " + item.path + " is not in EC zone");
+}
   } catch (IOException e) {
-throw new IOException("Unable to create EC zone for the path "
-+ item.path, e);
+throw new IOException("Unable to get EC zone for the path "
++ item.path + ". " + e.getMessage());
   }
 }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8bc4adb4/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml
index b7b29d3..66892f0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml
+++ 

[07/50] hadoop git commit: HDFS-8233. Fix DFSStripedOutputStream#getCurrentBlockGroupBytes when the last stripe is at the block group boundary. Contributed by Jing Zhao.

2015-05-18 Thread zhz
HDFS-8233. Fix DFSStripedOutputStream#getCurrentBlockGroupBytes when the last 
stripe is at the block group boundary. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d00ff693
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d00ff693
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d00ff693

Branch: refs/heads/HDFS-7285
Commit: d00ff69376e495f6e09f54c77412cf5802e7d472
Parents: b3cdbc8
Author: Jing Zhao ji...@apache.org
Authored: Thu Apr 23 15:43:04 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:47 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  5 +-
 .../hadoop/hdfs/DFSStripedOutputStream.java | 51 +---
 .../hadoop/hdfs/TestDFSStripedOutputStream.java |  6 +++
 3 files changed, 34 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d00ff693/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 8977c46..48791b1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -121,4 +121,7 @@
 schema. (Kai Zheng via Zhe Zhang)
 
 HDFS-8136. Client gets and uses EC schema when reads and writes a stripping
-file. (Kai Sasaki via Kai Zheng)
\ No newline at end of file
+file. (Kai Sasaki via Kai Zheng)
+
+HDFS-8233. Fix DFSStripedOutputStream#getCurrentBlockGroupBytes when the 
last
+stripe is at the block group boundary. (jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d00ff693/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
index eeb9d7e..245dfc1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
@@ -36,7 +36,6 @@ import org.apache.hadoop.hdfs.protocol.ECInfo;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
-import org.apache.hadoop.hdfs.util.StripedBlockUtil;
 import org.apache.hadoop.io.erasurecode.rawcoder.RSRawEncoder;
 import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;
 import org.apache.hadoop.util.DataChecksum;
@@ -278,14 +277,6 @@ public class DFSStripedOutputStream extends 
DFSOutputStream {
 return numDataBlocks * cellSize;
   }
 
-  private long getCurrentBlockGroupBytes() {
-long sum = 0;
-for (int i = 0; i < numDataBlocks; i++) {
-  sum += streamers.get(i).getBytesCurBlock();
-}
-return sum;
-  }
-
   private void notSupported(String headMsg)
   throws IOException{
   throw new IOException(
@@ -347,37 +338,43 @@ public class DFSStripedOutputStream extends 
DFSOutputStream {
 }
   }
 
+  /**
+   * Simply add bytesCurBlock together. Note that this result is not accurately
+   * the size of the block group.
+   */
+  private long getCurrentSumBytes() {
+long sum = 0;
+for (int i = 0; i < numDataBlocks; i++) {
+  sum += streamers.get(i).getBytesCurBlock();
+}
+return sum;
+  }
+
   private void writeParityCellsForLastStripe() throws IOException {
-final long currentBlockGroupBytes = getCurrentBlockGroupBytes();
-long parityBlkSize = StripedBlockUtil.getInternalBlockLength(
-currentBlockGroupBytes, cellSize, numDataBlocks,
-numDataBlocks + 1);
-if (parityBlkSize == 0 || currentBlockGroupBytes % stripeDataSize() == 0) {
+final long currentBlockGroupBytes = getCurrentSumBytes();
+if (currentBlockGroupBytes % stripeDataSize() == 0) {
   return;
 }
-int parityCellSize = parityBlkSize % cellSize == 0 ? cellSize :
-(int) (parityBlkSize % cellSize);
+long firstCellSize = getLeadingStreamer().getBytesCurBlock() % cellSize;
+long parityCellSize = firstCellSize > 0 && firstCellSize < cellSize ?
+firstCellSize : cellSize;
 
 for (int i = 0; i < numAllBlocks; i++) {
-  long internalBlkLen = StripedBlockUtil.getInternalBlockLength(
-  currentBlockGroupBytes, cellSize, numDataBlocks, i);
   // Pad zero bytes to make all cells exactly the size of parityCellSize
   // If internal 
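
The replacement derives the last stripe's parity cell size from the leading
streamer alone. A worked example with assumed numbers (3 data blocks, 64 KB
cells; neither value comes from this diff):

public class LastStripeParitySize {
  public static void main(String[] args) {
    int cellSize = 64 * 1024;
    long leadingBytesCurBlock = 100 * 1024;  // leading streamer's block bytes
    long firstCellSize = leadingBytesCurBlock % cellSize;  // 36864 = 36 KB
    long parityCellSize = firstCellSize > 0 && firstCellSize < cellSize
        ? firstCellSize : cellSize;
    System.out.println(parityCellSize);  // parity padded/encoded at 36 KB
  }
}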

[27/50] hadoop git commit: HDFS-8281. Erasure Coding: implement parallel stateful reading for striped layout. Contributed by Jing Zhao.

2015-05-18 Thread zhz
HDFS-8281. Erasure Coding: implement parallel stateful reading for striped 
layout. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/da024373
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/da024373
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/da024373

Branch: refs/heads/HDFS-7285
Commit: da024373ca3ea5313b161721fa46cb8679bfa11c
Parents: 2ad183e
Author: Jing Zhao ji...@apache.org
Authored: Mon May 4 14:44:58 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon May 18 10:01:51 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../org/apache/hadoop/hdfs/DFSInputStream.java  |  26 +++
 .../hadoop/hdfs/DFSStripedInputStream.java  | 217 +--
 .../hadoop/hdfs/util/StripedBlockUtil.java  |  34 ++-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  |  50 -
 .../hadoop/hdfs/TestPlanReadPortions.java   |   4 +-
 6 files changed, 246 insertions(+), 88 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/da024373/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index e30b2ed..77272e7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -161,3 +161,6 @@
 
 HDFS-8316. Erasure coding: refactor EC constants to be consistent with 
HDFS-8249.
 (Zhe Zhang via jing9)
+
+HDFS-8281. Erasure Coding: implement parallel stateful reading for striped 
layout.
+(jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/da024373/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index bef4da0..ca799fa 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -716,6 +716,16 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   interface ReaderStrategy {
 public int doRead(BlockReader blockReader, int off, int len)
 throws ChecksumException, IOException;
+
+/**
+ * Copy data from the src ByteBuffer into the read buffer.
+ * @param src The src buffer where the data is copied from
+ * @param offset Useful only when the ReadStrategy is based on a byte 
array.
+ *   Indicate the offset of the byte array for copy.
+ * @param length Useful only when the ReadStrategy is based on a byte 
array.
+ *   Indicate the length of the data to copy.
+ */
+public int copyFrom(ByteBuffer src, int offset, int length);
   }
 
   protected void updateReadStatistics(ReadStatistics readStatistics,
@@ -749,6 +759,13 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   updateReadStatistics(readStatistics, nRead, blockReader);
   return nRead;
 }
+
+@Override
+public int copyFrom(ByteBuffer src, int offset, int length) {
+  ByteBuffer writeSlice = src.duplicate();
+  writeSlice.get(buf, offset, length);
+  return length;
+}
   }
 
   /**
@@ -782,6 +799,15 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
 }
   } 
 }
+
+@Override
+public int copyFrom(ByteBuffer src, int offset, int length) {
+  ByteBuffer writeSlice = src.duplicate();
+  int remaining = Math.min(buf.remaining(), writeSlice.remaining());
+  writeSlice.limit(writeSlice.position() + remaining);
+  buf.put(writeSlice);
+  return remaining;
+}
   }
 
   /* This is a used by regular read() and handles ChecksumExceptions.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/da024373/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index 0dc98fd..13c4743 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -17,6 +17,7 @@
  */
 package 
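
The copyFrom implementations rely on a standard NIO trick: duplicate()
shares the underlying bytes but carries its own position and limit, so the
copy can bound and advance a private cursor without disturbing the source
buffer that other striped readers may still consume. A stand-alone
demonstration of exactly that sequence:

import java.nio.ByteBuffer;

public class CopyFromDemo {
  public static void main(String[] args) {
    ByteBuffer src = ByteBuffer.wrap("striped-cell-data".getBytes());
    ByteBuffer dst = ByteBuffer.allocate(8);

    ByteBuffer slice = src.duplicate();   // same bytes, independent cursor
    int remaining = Math.min(dst.remaining(), slice.remaining());
    slice.limit(slice.position() + remaining);
    dst.put(slice);                       // copies 8 bytes into dst

    System.out.println(src.position());   // 0: the source is untouched
  }
}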

hadoop git commit: HADOOP-11884. test-patch.sh should pull the real findbugs version (Kengo Seki via aw)

2015-05-18 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ea875939f - d8920


HADOOP-11884. test-patch.sh should pull the real findbugs version  (Kengo Seki 
via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d892
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d892
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d892

Branch: refs/heads/branch-2
Commit: d8920e861625d4996c2bd070c1cb2d8103ac
Parents: ea87593
Author: Allen Wittenauer a...@apache.org
Authored: Mon May 18 16:08:49 2015 +
Committer: Allen Wittenauer a...@apache.org
Committed: Mon May 18 16:09:34 2015 +

--
 dev-support/test-patch.sh   | 5 +++--
 hadoop-common-project/hadoop-common/CHANGES.txt | 3 +++
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d892/dev-support/test-patch.sh
--
diff --git a/dev-support/test-patch.sh b/dev-support/test-patch.sh
index 9cc5bb0..00a638c 100755
--- a/dev-support/test-patch.sh
+++ b/dev-support/test-patch.sh
@@ -1859,8 +1859,6 @@ function check_findbugs
 return 1
   fi
 
-  findbugs_version=$(${FINDBUGS_HOME}/bin/findbugs -version)
-
   for module in ${modules}
   do
    pushd ${module} >/dev/null
@@ -1872,6 +1870,9 @@ function check_findbugs
    popd >/dev/null
   done
 
+  #shellcheck disable=SC2016
+  findbugs_version=$(${AWK} 'match($0, /findbugs-maven-plugin:[^:]*:findbugs/) 
{ print substr($0, RSTART + 22, RLENGTH - 31); exit }' 
${PATCH_DIR}/patchFindBugsOutput${module_suffix}.txt)
+
   if [[ ${rc} -ne 0 ]]; then
    add_jira_table -1 findbugs "The patch appears to cause Findbugs (version ${findbugs_version}) to fail."
 return 1
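
For reference, the awk arithmetic works because the match covers the whole token findbugs-maven-plugin:<version>:findbugs: RSTART + 22 skips the 22-character prefix findbugs-maven-plugin:, and RLENGTH - 31 trims that prefix plus the 9-character :findbugs suffix, leaving only the version. A quick way to check it against a sample line (the Maven output below is made up for illustration):

  #!/usr/bin/env bash
  # Hypothetical findbugs-maven-plugin line from a Maven build log.
  line='[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) ---'

  echo "$line" | awk 'match($0, /findbugs-maven-plugin:[^:]*:findbugs/) {
    print substr($0, RSTART + 22, RLENGTH - 31); exit }'
  # prints: 3.0.0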

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d892/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 117bca2..ec11a02 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -104,6 +104,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11939. Deprecate DistCpV1 and Logalyzer.
 (Brahma Reddy Battula via aajisaka)
 
+HADOOP-11884. test-patch.sh should pull the real findbugs version
+(Kengo Seki via aw)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp



hadoop git commit: HADOOP-11949. Add user-provided plugins to test-patch (Sean Busbey via aw)

2015-05-18 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 10c922b5c - baf782f35


HADOOP-11949. Add user-provided plugins to test-patch (Sean Busbey via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/baf782f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/baf782f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/baf782f3

Branch: refs/heads/branch-2
Commit: baf782f353a16ca02da63b54944973eb099204ba
Parents: 10c922b
Author: Allen Wittenauer a...@apache.org
Authored: Mon May 18 17:06:31 2015 +
Committer: Allen Wittenauer a...@apache.org
Committed: Mon May 18 17:06:57 2015 +

--
 dev-support/test-patch.sh   | 27 +---
 hadoop-common-project/hadoop-common/CHANGES.txt |  2 ++
 2 files changed, 25 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/baf782f3/dev-support/test-patch.sh
--
diff --git a/dev-support/test-patch.sh b/dev-support/test-patch.sh
index ae74c5b..57fd657 100755
--- a/dev-support/test-patch.sh
+++ b/dev-support/test-patch.sh
@@ -40,6 +40,9 @@ function setup_defaults
   BASEDIR=$(pwd)
   RELOCATE_PATCH_DIR=false
 
+  USER_PLUGIN_DIR=
+  LOAD_SYSTEM_PLUGINS=true
+
   FINDBUGS_HOME=${FINDBUGS_HOME:-}
   ECLIPSE_HOME=${ECLIPSE_HOME:-}
   BUILD_NATIVE=${BUILD_NATIVE:-true}
@@ -586,9 +589,11 @@ function hadoop_usage
   echo "--modulelist=<list>    Specify additional modules to test (comma delimited)"
   echo "--offline              Avoid connecting to the Internet"
   echo "--patch-dir=<dir>      The directory for working and output files (default '/tmp/${PROJECT_NAME}-test-patch/pid')"
+  echo "--plugins=<dir>        A directory of user provided plugins. see test-patch.d for examples (default empty)"
   echo "--project=<name>       The short name for project currently using test-patch (default 'hadoop')"
   echo "--resetrepo            Forcibly clean the repo"
   echo "--run-tests            Run all relevant tests below the base directory"
+  echo "--skip-system-plugins  Do not load plugins from ${BINDIR}/test-patch.d"
   echo "--testlist=<list>      Specify which subsystem tests to use (comma delimited)"

   echo "Shell binary overrides:"
@@ -706,6 +711,9 @@ function parse_args
   --patch-dir=*)
 USER_PATCH_DIR=${i#*=}
   ;;
+  --plugins=*)
+USER_PLUGIN_DIR=${i#*=}
+  ;;
   --project=*)
 PROJECT_NAME=${i#*=}
   ;;
@@ -723,6 +731,9 @@ function parse_args
   --run-tests)
 RUN_TESTS=true
   ;;
+  --skip-system-plugins)
+LOAD_SYSTEM_PLUGINS=false
+  ;;
   --testlist=*)
 testlist=${i#*=}
 testlist=${testlist//,/ }
@@ -2523,17 +2534,25 @@ function runtests
   done
 }
 
-## @description  Import content from test-patch.d
+## @description  Import content from test-patch.d and optionally
+## @description  from user provided plugin directory
 ## @audience private
 ## @stabilityevolving
 ## @replaceable  no
 function importplugins
 {
   local i
-  local files
+  local files=()
+
+  if [[ ${LOAD_SYSTEM_PLUGINS} == true ]]; then
+    if [[ -d ${BINDIR}/test-patch.d ]]; then
+      files=(${BINDIR}/test-patch.d/*.sh)
+    fi
+  fi
 
-  if [[ -d ${BINDIR}/test-patch.d ]]; then
-    files=(${BINDIR}/test-patch.d/*.sh)
+  if [[ -n ${USER_PLUGIN_DIR} && -d ${USER_PLUGIN_DIR} ]]; then
+    hadoop_debug "Loading user provided plugins from ${USER_PLUGIN_DIR}"
+    files=("${files[@]}" "${USER_PLUGIN_DIR}"/*.sh)
   fi
 
   for i in "${files[@]}"; do
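
In short, the function seeds files from ${BINDIR}/test-patch.d only when LOAD_SYSTEM_PLUGINS is true, appends any *.sh files from the user directory, then sources each entry. A standalone sketch of that accumulate-and-source pattern (directory names here are placeholders, not real test-patch paths):

  #!/usr/bin/env bash
  system_dir=./test-patch.d   # stand-in for ${BINDIR}/test-patch.d
  user_dir=./my-plugins       # stand-in for the --plugins=<dir> value

  files=()
  [[ -d ${system_dir} ]] && files=("${system_dir}"/*.sh)
  if [[ -n ${user_dir} && -d ${user_dir} ]]; then
    files=("${files[@]}" "${user_dir}"/*.sh)
  fi

  for f in "${files[@]}"; do
    [[ -f ${f} ]] && . "${f}"   # source each plugin in glob order
  done

A run would then look something like test-patch.sh --plugins=/path/to/my-plugins, optionally with --skip-system-plugins to load only the user-provided set.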

http://git-wip-us.apache.org/repos/asf/hadoop/blob/baf782f3/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 6ca3f1e..ab7947f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -17,6 +17,8 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11843. Make setting up the build environment easier.
 (Niels Basjes via cnauroth)
 
+HADOOP-11949. Add user-provided plugins to test-patch (Sean Busbey via aw)
+
   IMPROVEMENTS
 
 HADOOP-6842. hadoop fs -text does not give a useful text representation



hadoop git commit: HADOOP-11944. add option to test-patch to avoid relocating patch process directory (Sean Busbey via aw)

2015-05-18 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk 182d86dac - bcc17866d


HADOOP-11944. add option to test-patch to avoid relocating patch process 
directory (Sean Busbey via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bcc17866
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bcc17866
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bcc17866

Branch: refs/heads/trunk
Commit: bcc17866ddb616e8c70e5aa044becd7a7d1bee35
Parents: 182d86d
Author: Allen Wittenauer a...@apache.org
Authored: Mon May 18 16:13:50 2015 +
Committer: Allen Wittenauer a...@apache.org
Committed: Mon May 18 16:13:50 2015 +

--
 dev-support/test-patch.sh   | 28 +++-
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 2 files changed, 18 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bcc17866/dev-support/test-patch.sh
--
diff --git a/dev-support/test-patch.sh b/dev-support/test-patch.sh
index 00a638c..ae74c5b 100755
--- a/dev-support/test-patch.sh
+++ b/dev-support/test-patch.sh
@@ -38,6 +38,7 @@ function setup_defaults
   HOW_TO_CONTRIBUTE=https://wiki.apache.org/hadoop/HowToContribute;
   JENKINS=false
   BASEDIR=$(pwd)
+  RELOCATE_PATCH_DIR=false
 
   FINDBUGS_HOME=${FINDBUGS_HOME:-}
   ECLIPSE_HOME=${ECLIPSE_HOME:-}
@@ -607,6 +608,7 @@ function hadoop_usage
   echo "--eclipse-home=<path>  Eclipse home directory (default ECLIPSE_HOME environment variable)"
   echo "--jira-cmd=<cmd>       The 'jira' command to use (default 'jira')"
   echo "--jira-password=<pw>   The password for the 'jira' command"
+  echo "--mv-patch-dir         Move the patch-dir into the basedir during cleanup."
   echo "--wget-cmd=<cmd>       The 'wget' command to use (default 'wget')"
 }
 
@@ -692,6 +694,9 @@ function parse_args
   --mvn-cmd=*)
 MVN=${i#*=}
   ;;
+  --mv-patch-dir)
+RELOCATE_PATCH_DIR=true;
+  ;;
   --offline)
 OFFLINE=true
   ;;
@@ -2323,19 +2328,16 @@ function cleanup_and_exit
 {
   local result=$1
 
-  if [[ ${JENKINS} == true ]] ; then
-    if [[ -e ${PATCH_DIR} ]] ; then
-      if [[ -d ${PATCH_DIR} ]]; then
-        # if PATCH_DIR is already inside BASEDIR, then
-        # there is no need to move it since we assume that
-        # Jenkins or whatever already knows where it is at
-        # since it told us to put it there!
-        relative_patchdir >/dev/null
-        if [[ $? == 1 ]]; then
-          hadoop_debug "mv ${PATCH_DIR} ${BASEDIR}"
-          mv ${PATCH_DIR} ${BASEDIR}
-        fi
-      fi
+  if [[ ${JENKINS} == true && ${RELOCATE_PATCH_DIR} == true && \
+      -e ${PATCH_DIR} && -d ${PATCH_DIR} ]] ; then
+    # if PATCH_DIR is already inside BASEDIR, then
+    # there is no need to move it since we assume that
+    # Jenkins or whatever already knows where it is at
+    # since it told us to put it there!
+    relative_patchdir >/dev/null
+    if [[ $? == 1 ]]; then
+      hadoop_debug "mv ${PATCH_DIR} ${BASEDIR}"
+      mv ${PATCH_DIR} ${BASEDIR}
 fi
   fi
  big_console_header "Finished build."
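
The rewrite collapses three nested ifs into one guard and, together with the new --mv-patch-dir flag, makes relocation opt-in; relative_patchdir signals via exit status 1 that PATCH_DIR lives outside BASEDIR and therefore needs moving. A condensed sketch of that exit-status pattern, with a stand-in for relative_patchdir (the real function's internals are not shown in this patch):

  #!/usr/bin/env bash
  BASEDIR=$(pwd)
  PATCH_DIR=/tmp/patchdir

  # Stand-in: succeed (0) if PATCH_DIR is already under BASEDIR.
  relative_patchdir() {
    [[ ${PATCH_DIR} == ${BASEDIR}/* ]]
  }

  if [[ -d ${PATCH_DIR} ]]; then
    if ! relative_patchdir >/dev/null; then
      mv "${PATCH_DIR}" "${BASEDIR}"   # relocate only when outside
    fi
  fi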

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bcc17866/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 1c2cdaa..8f66072 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -578,6 +578,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11884. test-patch.sh should pull the real findbugs version
 (Kengo Seki via aw)
 
+HADOOP-11944. add option to test-patch to avoid relocating patch process
+directory (Sean Busbey via aw)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp



hadoop git commit: HADOOP-11949. Add user-provided plugins to test-patch (Sean Busbey via aw)

2015-05-18 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk bcc17866d - 060c84ea8


HADOOP-11949. Add user-provided plugins to test-patch (Sean Busbey via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/060c84ea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/060c84ea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/060c84ea

Branch: refs/heads/trunk
Commit: 060c84ea86257e3dea2f834aac7ae27b1456c434
Parents: bcc1786
Author: Allen Wittenauer a...@apache.org
Authored: Mon May 18 17:06:31 2015 +
Committer: Allen Wittenauer a...@apache.org
Committed: Mon May 18 17:06:31 2015 +

--
 dev-support/test-patch.sh   | 27 +---
 hadoop-common-project/hadoop-common/CHANGES.txt |  2 ++
 2 files changed, 25 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/060c84ea/dev-support/test-patch.sh
--
diff --git a/dev-support/test-patch.sh b/dev-support/test-patch.sh
index ae74c5b..57fd657 100755
--- a/dev-support/test-patch.sh
+++ b/dev-support/test-patch.sh
@@ -40,6 +40,9 @@ function setup_defaults
   BASEDIR=$(pwd)
   RELOCATE_PATCH_DIR=false
 
+  USER_PLUGIN_DIR=
+  LOAD_SYSTEM_PLUGINS=true
+
   FINDBUGS_HOME=${FINDBUGS_HOME:-}
   ECLIPSE_HOME=${ECLIPSE_HOME:-}
   BUILD_NATIVE=${BUILD_NATIVE:-true}
@@ -586,9 +589,11 @@ function hadoop_usage
   echo "--modulelist=<list>    Specify additional modules to test (comma delimited)"
   echo "--offline              Avoid connecting to the Internet"
   echo "--patch-dir=<dir>      The directory for working and output files (default '/tmp/${PROJECT_NAME}-test-patch/pid')"
+  echo "--plugins=<dir>        A directory of user provided plugins. see test-patch.d for examples (default empty)"
   echo "--project=<name>       The short name for project currently using test-patch (default 'hadoop')"
   echo "--resetrepo            Forcibly clean the repo"
   echo "--run-tests            Run all relevant tests below the base directory"
+  echo "--skip-system-plugins  Do not load plugins from ${BINDIR}/test-patch.d"
   echo "--testlist=<list>      Specify which subsystem tests to use (comma delimited)"

   echo "Shell binary overrides:"
@@ -706,6 +711,9 @@ function parse_args
   --patch-dir=*)
 USER_PATCH_DIR=${i#*=}
   ;;
+  --plugins=*)
+USER_PLUGIN_DIR=${i#*=}
+  ;;
   --project=*)
 PROJECT_NAME=${i#*=}
   ;;
@@ -723,6 +731,9 @@ function parse_args
   --run-tests)
 RUN_TESTS=true
   ;;
+  --skip-system-plugins)
+LOAD_SYSTEM_PLUGINS=false
+  ;;
   --testlist=*)
 testlist=${i#*=}
 testlist=${testlist//,/ }
@@ -2523,17 +2534,25 @@ function runtests
   done
 }
 
-## @description  Import content from test-patch.d
+## @description  Import content from test-patch.d and optionally
+## @description  from user provided plugin directory
 ## @audience private
 ## @stabilityevolving
 ## @replaceable  no
 function importplugins
 {
   local i
-  local files
+  local files=()
+
+  if [[ ${LOAD_SYSTEM_PLUGINS} == true ]]; then
+    if [[ -d ${BINDIR}/test-patch.d ]]; then
+      files=(${BINDIR}/test-patch.d/*.sh)
+    fi
+  fi
 
-  if [[ -d ${BINDIR}/test-patch.d ]]; then
-    files=(${BINDIR}/test-patch.d/*.sh)
+  if [[ -n ${USER_PLUGIN_DIR} && -d ${USER_PLUGIN_DIR} ]]; then
+    hadoop_debug "Loading user provided plugins from ${USER_PLUGIN_DIR}"
+    files=("${files[@]}" "${USER_PLUGIN_DIR}"/*.sh)
   fi
 
   for i in "${files[@]}"; do

http://git-wip-us.apache.org/repos/asf/hadoop/blob/060c84ea/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 8f66072..324434b 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -486,6 +486,8 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11843. Make setting up the build environment easier.
 (Niels Basjes via cnauroth)
 
+HADOOP-11949. Add user-provided plugins to test-patch (Sean Busbey via aw)
+
   IMPROVEMENTS
 
 HADOOP-6842. hadoop fs -text does not give a useful text representation



[2/2] hadoop git commit: HDFS-8345. Storage policy APIs must be exposed via the FileSystem interface. (Arpit Agarwal)

2015-05-18 Thread arp
HDFS-8345. Storage policy APIs must be exposed via the FileSystem interface. 
(Arpit Agarwal)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9bcb7400
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9bcb7400
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9bcb7400

Branch: refs/heads/branch-2
Commit: 9bcb740049368a3feb4196cde003b842cc89d5d6
Parents: baf782f
Author: Arpit Agarwal a...@apache.org
Authored: Mon May 18 11:36:29 2015 -0700
Committer: Arpit Agarwal a...@apache.org
Committed: Mon May 18 11:49:01 2015 -0700

--
 .../apache/hadoop/fs/AbstractFileSystem.java| 27 
 .../apache/hadoop/fs/BlockStoragePolicySpi.java | 72 
 .../java/org/apache/hadoop/fs/FileContext.java  | 33 +
 .../java/org/apache/hadoop/fs/FileSystem.java   | 28 
 .../org/apache/hadoop/fs/FilterFileSystem.java  | 13 
 .../java/org/apache/hadoop/fs/FilterFs.java | 13 
 .../org/apache/hadoop/fs/viewfs/ChRootedFs.java | 14 
 .../org/apache/hadoop/fs/viewfs/ViewFs.java | 14 
 .../org/apache/hadoop/fs/TestHarFileSystem.java |  7 ++
 .../hdfs/protocol/BlockStoragePolicy.java   |  8 ++-
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../main/java/org/apache/hadoop/fs/Hdfs.java| 13 
 .../hadoop/hdfs/DistributedFileSystem.java  | 27 +---
 .../apache/hadoop/hdfs/server/mover/Mover.java  |  5 +-
 .../hadoop/hdfs/tools/StoragePolicyAdmin.java   |  5 +-
 .../hadoop/hdfs/TestBlockStoragePolicy.java | 46 +++--
 16 files changed, 308 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9bcb7400/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
index 7af5fa7..cb3fb86 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
@@ -23,6 +23,7 @@ import java.lang.reflect.Constructor;
 import java.net.URI;
 import java.net.URISyntaxException;
 import java.util.ArrayList;
+import java.util.Collection;
 import java.util.EnumSet;
 import java.util.HashMap;
 import java.util.List;
@@ -1221,6 +1222,32 @@ public abstract class AbstractFileSystem {
        + " doesn't support deleteSnapshot");
   }
 
+  /**
+   * Set the storage policy for a given file or directory.
+   *
+   * @param path file or directory path.
+   * @param policyName the name of the target storage policy. The list
+   *   of supported Storage policies can be retrieved
+   *   via {@link #getAllStoragePolicies}.
+   */
+  public void setStoragePolicy(final Path path, final String policyName)
+      throws IOException {
+    throw new UnsupportedOperationException(getClass().getSimpleName()
+        + " doesn't support setStoragePolicy");
+  }
+
+  /**
+   * Retrieve all the storage policies supported by this file system.
+   *
+   * @return all storage policies supported by this filesystem.
+   * @throws IOException
+   */
+  public Collection<? extends BlockStoragePolicySpi> getAllStoragePolicies()
+      throws IOException {
+    throw new UnsupportedOperationException(getClass().getSimpleName()
+        + " doesn't support getAllStoragePolicies");
+  }
+
   @Override //Object
   public int hashCode() {
 return myUri.hashCode();
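
From a client's point of view, the new surface means storage policies can be listed and applied through plain FileSystem, without casting to DistributedFileSystem. A hedged usage sketch (the cluster URI, path, and the policy name "HOT" are illustrative assumptions):

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.BlockStoragePolicySpi;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class StoragePolicySketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs =
          FileSystem.get(URI.create("hdfs://localhost:8020"), conf);

      // Enumerate what this file system supports, then pick one by name.
      for (BlockStoragePolicySpi policy : fs.getAllStoragePolicies()) {
        System.out.println(policy.getName());
      }
      fs.setStoragePolicy(new Path("/data/logs"), "HOT");
    }
  }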

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9bcb7400/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockStoragePolicySpi.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockStoragePolicySpi.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockStoragePolicySpi.java
new file mode 100644
index 000..1d6502e
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockStoragePolicySpi.java
@@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless 

[1/2] hadoop git commit: HDFS-8345. Storage policy APIs must be exposed via the FileSystem interface. (Arpit Agarwal)

2015-05-18 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 baf782f35 - 9bcb74004
  refs/heads/trunk 060c84ea8 - a2190bf15


HDFS-8345. Storage policy APIs must be exposed via the FileSystem interface. 
(Arpit Agarwal)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a2190bf1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a2190bf1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a2190bf1

Branch: refs/heads/trunk
Commit: a2190bf15d25e01fb4b220ba6401ce2f787a5c61
Parents: 060c84e
Author: Arpit Agarwal a...@apache.org
Authored: Mon May 18 11:36:29 2015 -0700
Committer: Arpit Agarwal a...@apache.org
Committed: Mon May 18 11:36:29 2015 -0700

--
 .../apache/hadoop/fs/AbstractFileSystem.java| 27 
 .../apache/hadoop/fs/BlockStoragePolicySpi.java | 72 
 .../java/org/apache/hadoop/fs/FileContext.java  | 33 +
 .../java/org/apache/hadoop/fs/FileSystem.java   | 28 
 .../org/apache/hadoop/fs/FilterFileSystem.java  | 13 
 .../java/org/apache/hadoop/fs/FilterFs.java | 13 
 .../org/apache/hadoop/fs/viewfs/ChRootedFs.java | 14 
 .../org/apache/hadoop/fs/viewfs/ViewFs.java | 14 
 .../org/apache/hadoop/fs/TestHarFileSystem.java |  7 ++
 .../hdfs/protocol/BlockStoragePolicy.java   |  8 ++-
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../main/java/org/apache/hadoop/fs/Hdfs.java| 13 
 .../hadoop/hdfs/DistributedFileSystem.java  | 27 +---
 .../apache/hadoop/hdfs/server/mover/Mover.java  |  5 +-
 .../hadoop/hdfs/tools/StoragePolicyAdmin.java   |  5 +-
 .../hadoop/hdfs/TestBlockStoragePolicy.java | 64 ++---
 16 files changed, 308 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2190bf1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
index 7af5fa7..cb3fb86 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
@@ -23,6 +23,7 @@ import java.lang.reflect.Constructor;
 import java.net.URI;
 import java.net.URISyntaxException;
 import java.util.ArrayList;
+import java.util.Collection;
 import java.util.EnumSet;
 import java.util.HashMap;
 import java.util.List;
@@ -1221,6 +1222,32 @@ public abstract class AbstractFileSystem {
        + " doesn't support deleteSnapshot");
   }
 
+  /**
+   * Set the storage policy for a given file or directory.
+   *
+   * @param path file or directory path.
+   * @param policyName the name of the target storage policy. The list
+   *   of supported Storage policies can be retrieved
+   *   via {@link #getAllStoragePolicies}.
+   */
+  public void setStoragePolicy(final Path path, final String policyName)
+      throws IOException {
+    throw new UnsupportedOperationException(getClass().getSimpleName()
+        + " doesn't support setStoragePolicy");
+  }
+
+  /**
+   * Retrieve all the storage policies supported by this file system.
+   *
+   * @return all storage policies supported by this filesystem.
+   * @throws IOException
+   */
+  public Collection<? extends BlockStoragePolicySpi> getAllStoragePolicies()
+      throws IOException {
+    throw new UnsupportedOperationException(getClass().getSimpleName()
+        + " doesn't support getAllStoragePolicies");
+  }
+
   @Override //Object
   public int hashCode() {
 return myUri.hashCode();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2190bf1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockStoragePolicySpi.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockStoragePolicySpi.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockStoragePolicySpi.java
new file mode 100644
index 000..1d6502e
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockStoragePolicySpi.java
@@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * 

hadoop git commit: HDFS-8405. Fix a typo in NamenodeFsck. Contributed by Takanobu Asanuma

2015-05-18 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/trunk a2190bf15 - 0c590e1c0


HDFS-8405. Fix a typo in NamenodeFsck.  Contributed by Takanobu Asanuma


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0c590e1c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0c590e1c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0c590e1c

Branch: refs/heads/trunk
Commit: 0c590e1c097462979f7ee054ad9121345d58655b
Parents: a2190bf
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Tue May 19 02:57:54 2015 +0800
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Tue May 19 02:57:54 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   |  2 ++
 .../hadoop/hdfs/server/namenode/FsckServlet.java  |  2 +-
 .../hadoop/hdfs/server/namenode/NamenodeFsck.java | 18 +++---
 .../hadoop/hdfs/server/namenode/TestFsck.java |  8 
 4 files changed, 14 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c590e1c/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 4270a9c..7fd3495 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -868,6 +868,8 @@ Release 2.7.1 - UNRELEASED
 HDFS-6300. Prevent multiple balancers from running simultaneously
 (Rakesh R via vinayakumarb)
 
+HDFS-8405. Fix a typo in NamenodeFsck.  (Takanobu Asanuma via szetszwo)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c590e1c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
index 6fb3d21..5fae9cd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
@@ -66,7 +66,7 @@ public class FsckServlet extends DfsServlet {
   namesystem.getNumberOfDatanodes(DatanodeReportType.LIVE); 
   new NamenodeFsck(conf, nn,
   bm.getDatanodeManager().getNetworkTopology(), pmap, out,
-  totalDatanodes, bm.minReplication, remoteAddress).fsck();
+  totalDatanodes, remoteAddress).fsck();
   
   return null;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c590e1c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
index 61f8fdb..44dba28 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
@@ -121,7 +121,6 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   private final NameNode namenode;
   private final NetworkTopology networktopology;
   private final int totalDatanodes;
-  private final short minReplication;
   private final InetAddress remoteAddress;
 
   private String lostFound = null;
@@ -181,19 +180,17 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
* @param pmap key=value[] map passed to the http servlet as url parameters
* @param out output stream to write the fsck output
* @param totalDatanodes number of live datanodes
-   * @param minReplication minimum replication
* @param remoteAddress source address of the fsck request
*/
   NamenodeFsck(Configuration conf, NameNode namenode,
   NetworkTopology networktopology, 
      Map<String,String[]> pmap, PrintWriter out,
-  int totalDatanodes, short minReplication, InetAddress remoteAddress) {
+  int totalDatanodes, InetAddress remoteAddress) {
 this.conf = conf;
 this.namenode = namenode;
 this.networktopology = networktopology;
 this.out = out;
 this.totalDatanodes = totalDatanodes;
-this.minReplication = minReplication;
 this.remoteAddress = remoteAddress;
 this.bpPolicy = BlockPlacementPolicy.getInstance(conf, null,
 

hadoop git commit: HDFS-8405. Fix a typo in NamenodeFsck. Contributed by Takanobu Asanuma

2015-05-18 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 9bcb74004 - 7bd4db968


HDFS-8405. Fix a typo in NamenodeFsck.  Contributed by Takanobu Asanuma


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7bd4db96
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7bd4db96
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7bd4db96

Branch: refs/heads/branch-2
Commit: 7bd4db968d3b1a803f392d63c776c6bb9fd0f935
Parents: 9bcb740
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Tue May 19 02:57:54 2015 +0800
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Tue May 19 03:01:07 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   |  2 ++
 .../hadoop/hdfs/server/namenode/FsckServlet.java  |  2 +-
 .../hadoop/hdfs/server/namenode/NamenodeFsck.java | 18 +++---
 .../hadoop/hdfs/server/namenode/TestFsck.java |  8 
 4 files changed, 14 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7bd4db96/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 03cec56..56eb913 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -534,6 +534,8 @@ Release 2.7.1 - UNRELEASED
 HDFS-6300. Prevent multiple balancers from running simultaneously
 (Rakesh R via vinayakumarb)
 
+HDFS-8405. Fix a typo in NamenodeFsck.  (Takanobu Asanuma via szetszwo)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7bd4db96/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
index 6fb3d21..5fae9cd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
@@ -66,7 +66,7 @@ public class FsckServlet extends DfsServlet {
   namesystem.getNumberOfDatanodes(DatanodeReportType.LIVE); 
   new NamenodeFsck(conf, nn,
   bm.getDatanodeManager().getNetworkTopology(), pmap, out,
-  totalDatanodes, bm.minReplication, remoteAddress).fsck();
+  totalDatanodes, remoteAddress).fsck();
   
   return null;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7bd4db96/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
index 493fd00..bf07bfb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
@@ -121,7 +121,6 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   private final NameNode namenode;
   private final NetworkTopology networktopology;
   private final int totalDatanodes;
-  private final short minReplication;
   private final InetAddress remoteAddress;
 
   private String lostFound = null;
@@ -180,19 +179,17 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
* @param pmap key=value[] map passed to the http servlet as url parameters
* @param out output stream to write the fsck output
* @param totalDatanodes number of live datanodes
-   * @param minReplication minimum replication
* @param remoteAddress source address of the fsck request
*/
   NamenodeFsck(Configuration conf, NameNode namenode,
   NetworkTopology networktopology, 
      Map<String,String[]> pmap, PrintWriter out,
-  int totalDatanodes, short minReplication, InetAddress remoteAddress) {
+  int totalDatanodes, InetAddress remoteAddress) {
 this.conf = conf;
 this.namenode = namenode;
 this.networktopology = networktopology;
 this.out = out;
 this.totalDatanodes = totalDatanodes;
-this.minReplication = minReplication;
 this.remoteAddress = remoteAddress;
 this.bpPolicy = BlockPlacementPolicy.getInstance(conf, 

hadoop git commit: HDFS-8405. Fix a typo in NamenodeFsck. Contributed by Takanobu Asanuma

2015-05-18 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 411c09b61 - 59d1b4a32


HDFS-8405. Fix a typo in NamenodeFsck.  Contributed by Takanobu Asanuma


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/59d1b4a3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/59d1b4a3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/59d1b4a3

Branch: refs/heads/branch-2.7
Commit: 59d1b4a3232c31edb72d541f2081d9040671f306
Parents: 411c09b
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Tue May 19 02:57:54 2015 +0800
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Tue May 19 03:17:51 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  |  2 ++
 .../hadoop/hdfs/server/namenode/FsckServlet.java |  2 +-
 .../hadoop/hdfs/server/namenode/NamenodeFsck.java| 15 ++-
 .../apache/hadoop/hdfs/server/namenode/TestFsck.java |  8 
 4 files changed, 13 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/59d1b4a3/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 98a7cf5..ddab0e5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -82,6 +82,8 @@ Release 2.7.1 - UNRELEASED
 HDFS-6300. Prevent multiple balancers from running simultaneously
 (Rakesh R via vinayakumarb)
 
+HDFS-8405. Fix a typo in NamenodeFsck.  (Takanobu Asanuma via szetszwo)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/59d1b4a3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
index 6fb3d21..5fae9cd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsckServlet.java
@@ -66,7 +66,7 @@ public class FsckServlet extends DfsServlet {
   namesystem.getNumberOfDatanodes(DatanodeReportType.LIVE); 
   new NamenodeFsck(conf, nn,
   bm.getDatanodeManager().getNetworkTopology(), pmap, out,
-  totalDatanodes, bm.minReplication, remoteAddress).fsck();
+  totalDatanodes, remoteAddress).fsck();
   
   return null;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/59d1b4a3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
index 7335eda..5074e41 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
@@ -118,7 +118,6 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
   private final NameNode namenode;
   private final NetworkTopology networktopology;
   private final int totalDatanodes;
-  private final short minReplication;
   private final InetAddress remoteAddress;
 
   private String lostFound = null;
@@ -175,19 +174,17 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
* @param pmap key=value[] map passed to the http servlet as url parameters
* @param out output stream to write the fsck output
* @param totalDatanodes number of live datanodes
-   * @param minReplication minimum replication
* @param remoteAddress source address of the fsck request
*/
   NamenodeFsck(Configuration conf, NameNode namenode,
   NetworkTopology networktopology, 
      Map<String,String[]> pmap, PrintWriter out,
-  int totalDatanodes, short minReplication, InetAddress remoteAddress) {
+  int totalDatanodes, InetAddress remoteAddress) {
 this.conf = conf;
 this.namenode = namenode;
 this.networktopology = networktopology;
 this.out = out;
 this.totalDatanodes = totalDatanodes;
-this.minReplication = minReplication;
 this.remoteAddress = remoteAddress;
 this.bpPolicy = 

hadoop git commit: HDFS-4185. Add a metric for number of active leases (Rakesh R via raviprak)

2015-05-18 Thread raviprak
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0c590e1c0 - cdfae446a


HDFS-4185. Add a metric for number of active leases (Rakesh R via raviprak)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cdfae446
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cdfae446
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cdfae446

Branch: refs/heads/trunk
Commit: cdfae446ad285db979a79bf55665363fd943702c
Parents: 0c590e1
Author: Ravi Prakash ravip...@altiscale.com
Authored: Mon May 18 12:37:21 2015 -0700
Committer: Ravi Prakash ravip...@altiscale.com
Committed: Mon May 18 12:37:21 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 +
 .../hdfs/server/namenode/FSNamesystem.java  | 17 ++
 .../hdfs/server/namenode/LeaseManager.java  |  9 +++
 .../namenode/metrics/TestNameNodeMetrics.java   | 59 ++--
 4 files changed, 83 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cdfae446/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 7fd3495..35c3b5a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -570,6 +570,8 @@ Release 2.8.0 - UNRELEASED
 HDFS-8345. Storage policy APIs must be exposed via the FileSystem
 interface. (Arpit Agarwal)
 
+HDFS-4185. Add a metric for number of active leases (Rakesh R via raviprak)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cdfae446/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 0fec5ee..7e5b981 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -5347,6 +5347,23 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   }
 
   /**
+   * Get the number of files under construction in the system.
+   */
+  @Metric({"NumFilesUnderConstruction",
+      "Number of files under construction"})
+  public long getNumFilesUnderConstruction() {
+    return leaseManager.countPath();
+  }
+
+  /**
+   * Get the total number of active clients holding lease in the system.
+   */
+  @Metric({"NumActiveClients", "Number of active clients holding lease"})
+  public long getNumActiveClients() {
+    return leaseManager.countLease();
+  }
+
+  /**
* Get the total number of COMPLETE blocks in the system.
* For safe mode only complete blocks are counted.
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cdfae446/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
index ade2312..0806f82 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
@@ -130,6 +130,15 @@ public class LeaseManager {
   @VisibleForTesting
   public synchronized int countLease() {return sortedLeases.size();}
 
+  /** @return the number of paths contained in all leases */
+  synchronized int countPath() {
+    int count = 0;
+    for (Lease lease : sortedLeases) {
+      count += lease.getFiles().size();
+    }
+    return count;
+  }
+
   /**
* Adds (or re-adds) the lease for the specified file.
*/
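
countPath() walks the sorted lease set and sums the open files per lease, which is what backs the NumFilesUnderConstruction gauge, while NumActiveClients reuses countLease(). A sketch of how a test might read the gauges back, in the style of TestNameNodeMetrics (the record name "FSNamesystem" and the expected values are assumptions for illustration):

  import static org.apache.hadoop.test.MetricsAsserts.assertGauge;
  import static org.apache.hadoop.test.MetricsAsserts.getMetrics;

  import org.apache.hadoop.metrics2.MetricsRecordBuilder;

  public class LeaseMetricsSketch {
    // Assert both new gauges after opening some files for write.
    static void verify(long expectedClients, long expectedFiles) {
      MetricsRecordBuilder rb = getMetrics("FSNamesystem");
      assertGauge("NumActiveClients", expectedClients, rb);
      assertGauge("NumFilesUnderConstruction", expectedFiles, rb);
    }
  }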

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cdfae446/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
index b390391..3120f85 100644
--- 

hadoop git commit: HDFS-4185. Add a metric for number of active leases (Rakesh R via raviprak)

2015-05-18 Thread raviprak
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7bd4db968 - e5b805d36


HDFS-4185. Add a metric for number of active leases (Rakesh R via raviprak)

(cherry picked from commit cdfae446ad285db979a79bf55665363fd943702c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e5b805d3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e5b805d3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e5b805d3

Branch: refs/heads/branch-2
Commit: e5b805d361dd1735c3ab615347e0bf5739759a07
Parents: 7bd4db9
Author: Ravi Prakash ravip...@altiscale.com
Authored: Mon May 18 12:37:21 2015 -0700
Committer: Ravi Prakash ravip...@altiscale.com
Committed: Mon May 18 12:38:32 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 +
 .../hdfs/server/namenode/FSNamesystem.java  | 17 ++
 .../hdfs/server/namenode/LeaseManager.java  |  9 +++
 .../namenode/metrics/TestNameNodeMetrics.java   | 59 ++--
 4 files changed, 83 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5b805d3/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 56eb913..36c3fe0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -233,6 +233,8 @@ Release 2.8.0 - UNRELEASED
 HDFS-8345. Storage policy APIs must be exposed via the FileSystem
 interface. (Arpit Agarwal)
 
+HDFS-4185. Add a metric for number of active leases (Rakesh R via raviprak)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5b805d3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 13692a0..4974b92 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -5341,6 +5341,23 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   }
 
   /**
+   * Get the number of files under construction in the system.
+   */
+  @Metric({"NumFilesUnderConstruction",
+      "Number of files under construction"})
+  public long getNumFilesUnderConstruction() {
+    return leaseManager.countPath();
+  }
+
+  /**
+   * Get the total number of active clients holding lease in the system.
+   */
+  @Metric({"NumActiveClients", "Number of active clients holding lease"})
+  public long getNumActiveClients() {
+    return leaseManager.countLease();
+  }
+
+  /**
* Get the total number of COMPLETE blocks in the system.
* For safe mode only complete blocks are counted.
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5b805d3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
index ade2312..0806f82 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
@@ -130,6 +130,15 @@ public class LeaseManager {
   @VisibleForTesting
   public synchronized int countLease() {return sortedLeases.size();}
 
+  /** @return the number of paths contained in all leases */
+  synchronized int countPath() {
+    int count = 0;
+    for (Lease lease : sortedLeases) {
+      count += lease.getFiles().size();
+    }
+    return count;
+  }
+
   /**
* Adds (or re-adds) the lease for the specified file.
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5b805d3/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
 

hadoop git commit: HADOOP-1540. Support file exclusion list in distcp. Contributed by Rich Haase.

2015-05-18 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/trunk 76afd2886 - 0790275f0


HADOOP-1540. Support file exclusion list in distcp. Contributed by Rich Haase.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0790275f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0790275f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0790275f

Branch: refs/heads/trunk
Commit: 0790275f058b0cf41780ad337c9150a1e8ebebc6
Parents: 76afd28
Author: Jing Zhao ji...@apache.org
Authored: Mon May 18 13:24:35 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 13:24:35 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   2 +
 .../org/apache/hadoop/tools/CopyFilter.java |  60 +
 .../apache/hadoop/tools/DistCpConstants.java|   3 +-
 .../apache/hadoop/tools/DistCpOptionSwitch.java |  11 +-
 .../org/apache/hadoop/tools/DistCpOptions.java  |  30 ++-
 .../org/apache/hadoop/tools/OptionsParser.java  | 267 ---
 .../apache/hadoop/tools/RegexCopyFilter.java|  98 +++
 .../apache/hadoop/tools/SimpleCopyListing.java  |  23 +-
 .../org/apache/hadoop/tools/TrueCopyFilter.java |  33 +++
 .../org/apache/hadoop/tools/package-info.java   |  26 ++
 .../apache/hadoop/tools/TestCopyListing.java|  34 ---
 .../apache/hadoop/tools/TestIntegration.java|  49 
 .../apache/hadoop/tools/TestOptionsParser.java  |  17 +-
 .../hadoop/tools/TestRegexCopyFilter.java   | 113 
 .../apache/hadoop/tools/TestTrueCopyFilter.java |  36 +++
 15 files changed, 613 insertions(+), 189 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0790275f/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 324434b..cf09c5f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -583,6 +583,8 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11944. add option to test-patch to avoid relocating patch process
 directory (Sean Busbey via aw)
 
+HADOOP-1540. Support file exclusion list in distcp. (Rich Haase via jing9)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0790275f/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
new file mode 100644
index 000..3da364c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.tools;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Interface for excluding files from DistCp.
+ *
+ */
+public abstract class CopyFilter {
+
+  /**
+   * Default initialize method does nothing.
+   */
+  public void initialize() {}
+
+  /**
+   * Predicate to determine if a file can be excluded from copy.
+   *
+   * @param path a Path to be considered for copying
+   * @return boolean, true to copy, false to exclude
+   */
+  public abstract boolean shouldCopy(Path path);
+
+  /**
+   * Public factory method which returns the appropriate implementation of
+   * CopyFilter.
+   *
+   * @param conf DistCp configuration
+   * @return An instance of the appropriate CopyFilter
+   */
+  public static CopyFilter getCopyFilter(Configuration conf) {
+    String filtersFilename = conf.get(DistCpConstants.CONF_LABEL_FILTERS_FILE);
+
+    if (filtersFilename == null) {
+      return new TrueCopyFilter();
+    } else {
+      String filterFilename = conf.get(
+  
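
Since CopyFilter is an abstract class with a no-op initialize(), writing a filter is just a matter of overriding shouldCopy. The factory above only selects between TrueCopyFilter and the filters-file path, so the subclass below is purely hypothetical (class name and the excluded directory are made up):

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.tools.CopyFilter;

  /** Hypothetical filter: skip MapReduce _temporary working dirs. */
  public class NoTemporaryCopyFilter extends CopyFilter {
    @Override
    public boolean shouldCopy(Path path) {
      // true means copy, false means exclude, per the contract above.
      return !path.toString().contains("/_temporary/")
          && !path.getName().equals("_temporary");
    }
  }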

hadoop git commit: YARN-3541. Add version info on timeline service / generic history web UI and REST API. Contributed by Zhijie Shen

2015-05-18 Thread xgong
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 e5b805d36 - 3ceb2ffe5


YARN-3541. Add version info on timeline service / generic history web UI and 
REST API. Contributed by Zhijie Shen

(cherry picked from commit 76afd28862c1f27011273659a82cd45903a77170)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3ceb2ffe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3ceb2ffe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3ceb2ffe

Branch: refs/heads/branch-2
Commit: 3ceb2ffe549c1e9ea0cd29db0df806c673a2b1bf
Parents: e5b805d
Author: Xuan xg...@apache.org
Authored: Mon May 18 13:17:16 2015 -0700
Committer: Xuan xg...@apache.org
Committed: Mon May 18 13:19:01 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../api/records/timeline/TimelineAbout.java | 116 +++
 .../yarn/util/timeline/TimelineUtils.java   |  14 +++
 .../webapp/AHSController.java   |   4 +
 .../webapp/AHSWebApp.java   |   1 +
 .../webapp/AHSWebServices.java  |  12 ++
 .../webapp/AboutBlock.java  |  47 
 .../webapp/AboutPage.java   |  36 ++
 .../webapp/NavBlock.java|   2 +
 .../timeline/webapp/TimelineWebServices.java|  41 +--
 .../webapp/TestAHSWebApp.java   |  14 +++
 .../webapp/TestAHSWebServices.java  |  31 +
 .../webapp/TestTimelineWebServices.java |  25 +++-
 .../src/site/markdown/TimelineServer.md | 101 +++-
 14 files changed, 404 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3ceb2ffe/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 59a2742..7ac38ce 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -66,6 +66,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3505. Node's Log Aggregation Report with SUCCEED should not cached in 
 RMApps. (Xuan Gong via junping_du)
 
+YARN-3541. Add version info on timeline service / generic history web UI
+and REST API. (Zhijie Shen via xgong)
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3ceb2ffe/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineAbout.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineAbout.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineAbout.java
new file mode 100644
index 000..0a2625c
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineAbout.java
@@ -0,0 +1,116 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.records.timeline;
+
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+
+@XmlRootElement(name = "about")
+@XmlAccessorType(XmlAccessType.NONE)
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public class TimelineAbout {
+
+  private String about;
+  private String timelineServiceVersion;
+  private String timelineServiceBuildVersion;
+  private String timelineServiceVersionBuiltOn;
+  private String hadoopVersion;
+  private String hadoopBuildVersion;
+  private String hadoopVersionBuiltOn;
+
+  public TimelineAbout() {
+  }
+
+  public 
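
Because TimelineAbout is a JAXB bean with field-level @XmlElement annotations and XmlAccessType.NONE, the REST layer can marshal it to XML (or JSON via Jackson's JAXB support) without extra plumbing. A self-contained JAXB sketch with a made-up stand-in bean, not the real TimelineAbout:

  import javax.xml.bind.JAXBContext;
  import javax.xml.bind.Marshaller;
  import javax.xml.bind.annotation.XmlAccessType;
  import javax.xml.bind.annotation.XmlAccessorType;
  import javax.xml.bind.annotation.XmlElement;
  import javax.xml.bind.annotation.XmlRootElement;

  @XmlRootElement(name = "about")
  @XmlAccessorType(XmlAccessType.NONE)
  class AboutInfo {
    @XmlElement(name = "About")
    public String about = "Timeline API";   // sample value
  }

  public class JaxbSketch {
    public static void main(String[] args) throws Exception {
      Marshaller m =
          JAXBContext.newInstance(AboutInfo.class).createMarshaller();
      m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
      m.marshal(new AboutInfo(), System.out);
      // prints: <about><About>Timeline API</About></about>
    }
  }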

hadoop git commit: YARN-3541. Add version info on timeline service / generic history web UI and REST API. Contributed by Zhijie Shen

2015-05-18 Thread xgong
Repository: hadoop
Updated Branches:
  refs/heads/trunk cdfae446a - 76afd2886


YARN-3541. Add version info on timeline service / generic history web UI and 
REST API. Contributed by Zhijie Shen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/76afd288
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/76afd288
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/76afd288

Branch: refs/heads/trunk
Commit: 76afd28862c1f27011273659a82cd45903a77170
Parents: cdfae44
Author: Xuan xg...@apache.org
Authored: Mon May 18 13:17:16 2015 -0700
Committer: Xuan xg...@apache.org
Committed: Mon May 18 13:17:16 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../api/records/timeline/TimelineAbout.java | 116 +++
 .../yarn/util/timeline/TimelineUtils.java   |  14 +++
 .../webapp/AHSController.java   |   4 +
 .../webapp/AHSWebApp.java   |   1 +
 .../webapp/AHSWebServices.java  |  12 ++
 .../webapp/AboutBlock.java  |  47 
 .../webapp/AboutPage.java   |  36 ++
 .../webapp/NavBlock.java|   2 +
 .../timeline/webapp/TimelineWebServices.java|  41 +--
 .../webapp/TestAHSWebApp.java   |  14 +++
 .../webapp/TestAHSWebServices.java  |  31 +
 .../webapp/TestTimelineWebServices.java |  25 +++-
 .../src/site/markdown/TimelineServer.md | 101 +++-
 14 files changed, 404 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/76afd288/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 82174e7..c6f753d 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -114,6 +114,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3505. Node's Log Aggregation Report with SUCCEED should not cached in 
 RMApps. (Xuan Gong via junping_du)
 
+YARN-3541. Add version info on timeline service / generic history web UI
+and REST API. (Zhijie Shen via xgong)
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/76afd288/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineAbout.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineAbout.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineAbout.java
new file mode 100644
index 000..0a2625c
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timeline/TimelineAbout.java
@@ -0,0 +1,116 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.records.timeline;
+
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+
+@XmlRootElement(name = "about")
+@XmlAccessorType(XmlAccessType.NONE)
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public class TimelineAbout {
+
+  private String about;
+  private String timelineServiceVersion;
+  private String timelineServiceBuildVersion;
+  private String timelineServiceVersionBuiltOn;
+  private String hadoopVersion;
+  private String hadoopBuildVersion;
+  private String hadoopVersionBuiltOn;
+
+  public TimelineAbout() {
+  }
+
+  public TimelineAbout(String about) {
+this.about = about;
+  }
+
+  @XmlElement(name = "About")
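
A minimal sketch of how a JAXB bean like this is marshalled, assuming only what the diff shows (the class name, its @XmlRootElement/@XmlElement annotations, and the TimelineAbout(String) constructor); the demo class and its values are hypothetical, and the real fields are presumably filled in by the TimelineUtils change listed in this commit:

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import org.apache.hadoop.yarn.api.records.timeline.TimelineAbout;

public class TimelineAboutDemo {
  public static void main(String[] args) throws Exception {
    // Illustrative value; the patch wires this bean into the AHS/timeline
    // web UI and REST responses.
    TimelineAbout about = new TimelineAbout("Timeline API");
    Marshaller m =
        JAXBContext.newInstance(TimelineAbout.class).createMarshaller();
    m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
    // With XmlAccessType.NONE only annotated members are emitted, so this
    // prints roughly: <about><About>Timeline API</About></about>
    m.marshal(about, System.out);
  }
}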

hadoop git commit: HDFS-8417. Erasure Coding: Pread failed to read data starting from not-first stripe. Contributed by Walter Su.

2015-05-18 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 031b4d09d -> a579a19c8


HDFS-8417. Erasure Coding: Pread failed to read data starting from not-first 
stripe. Contributed by Walter Su.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a579a19c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a579a19c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a579a19c

Branch: refs/heads/HDFS-7285
Commit: a579a19c84893d287bad4408618e96e695178e42
Parents: 031b4d0
Author: Jing Zhao ji...@apache.org
Authored: Mon May 18 15:08:30 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 15:08:30 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +
 .../hadoop/hdfs/util/StripedBlockUtil.java  |  3 +-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 72 
 .../server/datanode/SimulatedFSDataset.java |  2 +-
 4 files changed, 50 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a579a19c/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 333d85f..e016ba0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -217,3 +217,6 @@
 assigning new tasks. (umamahesh)
 
 HDFS-8367. BlockInfoStriped uses EC schema. (Kai Sasaki via Kai Zheng)
+
+HDFS-8417. Erasure Coding: Pread failed to read data starting from 
not-first stripe.
+(Walter Su via jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a579a19c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
index c95f0b4..81c0c95 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
@@ -379,7 +379,8 @@ public class StripedBlockUtil {
 int firstCellIdxInBG = (int) (start / cellSize);
 int lastCellIdxInBG = (int) (end / cellSize);
 int firstCellSize = Math.min(cellSize - (int) (start % cellSize), len);
-long firstCellOffsetInBlk = start % cellSize;
+long firstCellOffsetInBlk = firstCellIdxInBG / dataBlkNum * cellSize +
+start % cellSize;
 int lastCellSize = lastCellIdxInBG == firstCellIdxInBG ?
 firstCellSize : (int) (end % cellSize) + 1;
 

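For readers following the one-line fix above: the old expression dropped the contribution of the full stripes that precede the first requested cell inside its internal block. A standalone sketch of the before/after arithmetic with small illustrative numbers (the demo class and values are not from the patch):

public class CellOffsetDemo {
  public static void main(String[] args) {
    int cellSize = 64, dataBlkNum = 3;
    long start = 200;  // data-only offset into the block group
    int firstCellIdxInBG = (int) (start / cellSize);  // cell 3, i.e. 2nd stripe
    long oldOffset = start % cellSize;                       // 8: wrong stripe
    long newOffset = firstCellIdxInBG / dataBlkNum * cellSize
        + start % cellSize;                                  // 64 + 8 = 72
    System.out.println(oldOffset + " -> " + newOffset);
  }
}
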
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a579a19c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
index 452cc2b..9032d09 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.hdfs.util.StripedBlockUtil;
 import org.apache.hadoop.io.erasurecode.ECSchema;
 import org.apache.hadoop.io.erasurecode.rawcoder.RSRawDecoder;
 import org.junit.After;
+import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
@@ -115,40 +116,55 @@ public class TestDFSStripedInputStream {
 DFSTestUtil.createStripedFile(cluster, filePath, null, numBlocks,
 NUM_STRIPE_PER_BLOCK, false);
 LocatedBlocks lbs = fs.getClient().namenode.getBlockLocations(
-filePath.toString(), 0, BLOCK_GROUP_SIZE);
+filePath.toString(), 0, BLOCK_GROUP_SIZE * numBlocks);
+int fileLen = BLOCK_GROUP_SIZE * numBlocks;
 
-assert lbs.get(0) instanceof LocatedStripedBlock;
-LocatedStripedBlock bg = (LocatedStripedBlock)(lbs.get(0));
-for (int i = 0; i < DATA_BLK_NUM; i++) {
-  Block blk = new Block(bg.getBlock().getBlockId() + i,
-  NUM_STRIPE_PER_BLOCK * CELLSIZE,
-  bg.getBlock().getGenerationStamp());
-  blk.setGenerationStamp(bg.getBlock().getGenerationStamp());
-  cluster.injectBlocks(i, Arrays.asList(blk),
-  bg.getBlock().getBlockPoolId());
-}
-DFSStripedInputStream in =
-new 

hadoop git commit: HADOOP-1540. Support file exclusion list in distcp. Contributed by Rich Haase.

2015-05-18 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 3ceb2ffe5 -> 5caea4cd4


HADOOP-1540. Support file exclusion list in distcp. Contributed by Rich Haase.

(cherry picked from commit 0790275f058b0cf41780ad337c9150a1e8ebebc6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5caea4cd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5caea4cd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5caea4cd

Branch: refs/heads/branch-2
Commit: 5caea4cd46bb8f421f8db801e83a4be7709a9cc5
Parents: 3ceb2ff
Author: Jing Zhao ji...@apache.org
Authored: Mon May 18 13:24:35 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 13:26:09 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   2 +
 .../org/apache/hadoop/tools/CopyFilter.java |  60 +
 .../apache/hadoop/tools/DistCpConstants.java|   3 +-
 .../apache/hadoop/tools/DistCpOptionSwitch.java |  11 +-
 .../org/apache/hadoop/tools/DistCpOptions.java  |  30 ++-
 .../org/apache/hadoop/tools/OptionsParser.java  | 267 ---
 .../apache/hadoop/tools/RegexCopyFilter.java|  98 +++
 .../apache/hadoop/tools/SimpleCopyListing.java  |  23 +-
 .../org/apache/hadoop/tools/TrueCopyFilter.java |  33 +++
 .../org/apache/hadoop/tools/package-info.java   |  26 ++
 .../apache/hadoop/tools/TestCopyListing.java|  34 ---
 .../apache/hadoop/tools/TestIntegration.java|  49 
 .../apache/hadoop/tools/TestOptionsParser.java  |  17 +-
 .../hadoop/tools/TestRegexCopyFilter.java   | 113 
 .../apache/hadoop/tools/TestTrueCopyFilter.java |  36 +++
 15 files changed, 613 insertions(+), 189 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5caea4cd/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index ab7947f..3205a4a 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -112,6 +112,8 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11944. add option to test-patch to avoid relocating patch process
 directory (Sean Busbey via aw)
 
+HADOOP-1540. Support file exclusion list in distcp. (Rich Haase via jing9)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5caea4cd/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
new file mode 100644
index 000..3da364c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.tools;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Interface for excluding files from DistCp.
+ *
+ */
+public abstract class CopyFilter {
+
+  /**
+   * Default initialize method does nothing.
+   */
+  public void initialize() {}
+
+  /**
+   * Predicate to determine if a file can be excluded from copy.
+   *
+   * @param path a Path to be considered for copying
+   * @return boolean, true to copy, false to exclude
+   */
+  public abstract boolean shouldCopy(Path path);
+
+  /**
+   * Public factory method which returns the appropriate implementation of
+   * CopyFilter.
+   *
+   * @param conf DistCp configuration
+   * @return An instance of the appropriate CopyFilter
+   */
+  public static CopyFilter getCopyFilter(Configuration conf) {
+String filtersFilename = conf.get(DistCpConstants.CONF_LABEL_FILTERS_FILE);
+
+if (filtersFilename == null) {
+  return new TrueCopyFilter();
+} else {
+  
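
The message is truncated here, but the factory's contract is visible above: with no filters file configured it falls back to a filter that copies everything, otherwise it builds one from the configured file. A hypothetical subclass (not part of the patch) showing the extension point; shouldCopy returning false excludes the path, per the javadoc above:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.CopyFilter;

public class NoTmpCopyFilter extends CopyFilter {
  @Override
  public boolean shouldCopy(Path path) {
    // Skip scratch files; everything else is copied.
    return !path.getName().endsWith(".tmp");
  }
}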

hadoop git commit: HDFS-8418. Fix the isNeededReplication calculation for Striped block in NN. Contributed by Yi Liu.

2015-05-18 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 a579a19c8 -> b596edc42


HDFS-8418. Fix the isNeededReplication calculation for Striped block in NN. 
Contributed by Yi Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b596edc4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b596edc4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b596edc4

Branch: refs/heads/HDFS-7285
Commit: b596edc426a7ada75ffe99cab21db42b0774e292
Parents: a579a19
Author: Jing Zhao ji...@apache.org
Authored: Mon May 18 19:06:34 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 19:06:34 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 ++
 .../server/blockmanagement/BlockManager.java| 54 
 .../blockmanagement/DecommissionManager.java| 11 ++--
 .../hdfs/server/namenode/NamenodeFsck.java  |  2 +-
 4 files changed, 43 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b596edc4/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index e016ba0..1549930 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -220,3 +220,6 @@
 
 HDFS-8417. Erasure Coding: Pread failed to read data starting from 
not-first stripe.
 (Walter Su via jing9)
+
+HDFS-8418. Fix the isNeededReplication calculation for Striped block in NN.
+(Yi Liu via jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b596edc4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 9cdfa05..2215b65 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -42,7 +42,6 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
-import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.HAUtil;
@@ -84,6 +83,7 @@ import 
org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks;
 import org.apache.hadoop.hdfs.util.LightWeightLinkedSet;
 import org.apache.hadoop.io.erasurecode.ECSchema;
 
+import static 
org.apache.hadoop.hdfs.protocol.HdfsConstants.BLOCK_STRIPED_CELL_SIZE;
 import static 
org.apache.hadoop.hdfs.util.StripedBlockUtil.getInternalBlockLength;
 
 import org.apache.hadoop.net.Node;
@@ -602,16 +602,7 @@ public class BlockManager {
 
   public short getMinStorageNum(BlockInfo block) {
 if (block.isStriped()) {
-  final BlockInfoStriped sblock = (BlockInfoStriped) block;
-  short dataBlockNum = sblock.getDataBlockNum();
-  if (sblock.isComplete() ||
-  sblock.getBlockUCState() == BlockUCState.COMMITTED) {
-// if the sblock is committed/completed and its length is less than a
-// full stripe, the minimum storage number needs to be adjusted
-dataBlockNum = (short) Math.min(dataBlockNum,
-(sblock.getNumBytes() - 1) / HdfsConstants.BLOCK_STRIPED_CELL_SIZE 
+ 1);
-  }
-  return dataBlockNum;
+  return getStripedDataBlockNum(block);
 } else {
   return minReplication;
 }
@@ -1256,7 +1247,7 @@ public class BlockManager {
   return;
 } 
 short expectedReplicas =
-b.stored.getBlockCollection().getPreferredBlockReplication();
+getExpectedReplicaNum(b.stored.getBlockCollection(), b.stored);
 
 // Add replica to the data-node if it is not already there
 if (storageInfo != null) {
@@ -1435,7 +1426,7 @@ public class BlockManager {
   continue;
 }
 
-requiredReplication = bc.getPreferredBlockReplication();
+requiredReplication = getExpectedReplicaNum(bc, block);
 
 // get a source data-node
containingNodes = new ArrayList<>();
@@ -1535,7 +1526,7 @@ public class BlockManager {
 rw.targets = null;
 continue;
   }
-  
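
A worked example of the minimum-storage adjustment quoted in the first hunk above, using the system default schema (6 data blocks, 256KB cells); the demo class and the 300KB figure are illustrative only:

public class MinStorageDemo {
  public static void main(String[] args) {
    int cellSize = 256 * 1024;   // HdfsConstants.BLOCK_STRIPED_CELL_SIZE
    short dataBlockNum = 6;      // HdfsConstants.NUM_DATA_BLOCKS
    long numBytes = 300 * 1024;  // committed group shorter than one stripe
    // (numBytes - 1) / cellSize + 1 counts the cells actually written
    short adjusted = (short) Math.min(dataBlockNum,
        (numBytes - 1) / cellSize + 1);
    System.out.println(adjusted); // 2: only two data blocks hold bytes
  }
}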

[08/50] hadoop git commit: HDFS-8216. TestDFSStripedOutputStream should use BlockReaderTestUtil to create BlockReader. Contributed by Tsz Wo Nicholas Sze.

2015-05-18 Thread jing9
HDFS-8216. TestDFSStripedOutputStream should use BlockReaderTestUtil to create 
BlockReader. Contributed by Tsz Wo Nicholas Sze.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f00da2d9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f00da2d9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f00da2d9

Branch: refs/heads/HDFS-7285
Commit: f00da2d964b867ef59ea51e7e3120c8fe9e0ac22
Parents: 1c69e45
Author: Zhe Zhang z...@apache.org
Authored: Tue Apr 21 20:56:39 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:05 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +
 .../apache/hadoop/hdfs/BlockReaderTestUtil.java |  7 +--
 .../hadoop/hdfs/TestBlockReaderFactory.java | 16 +++---
 .../hadoop/hdfs/TestDFSStripedOutputStream.java | 58 ++--
 4 files changed, 20 insertions(+), 64 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f00da2d9/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 8f28285..d8f2e9d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -107,3 +107,6 @@
 
 HDFS-8190. StripedBlockUtil.getInternalBlockLength may have overflow error.
 (szetszwo)
+
+HDFS-8216. TestDFSStripedOutputStream should use BlockReaderTestUtil to 
+create BlockReader. (szetszwo via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f00da2d9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
index 88b7f37..829cf03 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
@@ -165,20 +165,19 @@ public class BlockReaderTestUtil {
*/
   public BlockReader getBlockReader(LocatedBlock testBlock, int offset, int 
lenToRead)
   throws IOException {
-return getBlockReader(cluster, testBlock, offset, lenToRead);
+return getBlockReader(cluster.getFileSystem(), testBlock, offset, 
lenToRead);
   }
 
   /**
* Get a BlockReader for the given block.
*/
-  public static BlockReader getBlockReader(MiniDFSCluster cluster,
-  LocatedBlock testBlock, int offset, int lenToRead) throws IOException {
+  public static BlockReader getBlockReader(final DistributedFileSystem fs,
+  LocatedBlock testBlock, int offset, long lenToRead) throws IOException {
 InetSocketAddress targetAddr = null;
 ExtendedBlock block = testBlock.getBlock();
 DatanodeInfo[] nodes = testBlock.getLocations();
 targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());
 
-final DistributedFileSystem fs = cluster.getFileSystem();
 return new BlockReaderFactory(fs.getClient().getConf()).
   setInetSocketAddress(targetAddr).
   setBlock(block).

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f00da2d9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderFactory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderFactory.java
index d8aceff..1a767c3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderFactory.java
@@ -250,8 +250,8 @@ public class TestBlockReaderFactory {
   LocatedBlock lblock = locatedBlocks.get(0); // first block
   BlockReader blockReader = null;
   try {
-blockReader = BlockReaderTestUtil.
-getBlockReader(cluster, lblock, 0, TEST_FILE_LEN);
+blockReader = BlockReaderTestUtil.getBlockReader(
+cluster.getFileSystem(), lblock, 0, TEST_FILE_LEN);
Assert.fail("expected getBlockReader to fail the first time.");
   } catch (Throwable t) { 
Assert.assertTrue("expected to see 'TCP reads were disabled " +
@@ -265,8 +265,8 @@ public class TestBlockReaderFactory {
 
   // Second time should succeed.
   

[34/50] hadoop git commit: HDFS-8289. Erasure Coding: add ECSchema to HdfsFileStatus. Contributed by Yong Zhang.

2015-05-18 Thread jing9
HDFS-8289. Erasure Coding: add ECSchema to HdfsFileStatus. Contributed by Yong 
Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/55763dfd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/55763dfd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/55763dfd

Branch: refs/heads/HDFS-7285
Commit: 55763dfd15af2fccc928d4a97f242a83f2c3f677
Parents: 59521d2
Author: Jing Zhao ji...@apache.org
Authored: Thu May 7 11:52:49 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:09 2015 -0700

--
 .../hadoop/hdfs/protocol/HdfsFileStatus.java| 10 ++-
 .../protocol/SnapshottableDirectoryStatus.java  |  2 +-
 .../apache/hadoop/hdfs/web/JsonUtilClient.java  |  2 +-
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  6 +-
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |  2 +-
 .../hadoop/hdfs/DFSStripedInputStream.java  | 13 ++--
 .../hadoop/hdfs/DFSStripedOutputStream.java |  4 +-
 .../hdfs/protocol/HdfsLocatedFileStatus.java|  5 +-
 .../ClientNamenodeProtocolTranslatorPB.java |  2 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java | 10 ++-
 .../server/namenode/FSDirStatAndListingOp.java  | 16 +++--
 .../src/main/proto/erasurecoding.proto  | 19 --
 .../hadoop-hdfs/src/main/proto/hdfs.proto   | 22 +++
 .../hadoop/hdfs/TestDFSClientRetries.java   |  4 +-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 16 +++--
 .../apache/hadoop/hdfs/TestEncryptionZones.java |  2 +-
 .../hadoop/hdfs/TestFileStatusWithECschema.java | 65 
 .../java/org/apache/hadoop/hdfs/TestLease.java  |  4 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   |  2 +-
 .../apache/hadoop/hdfs/web/TestJsonUtil.java|  2 +-
 21 files changed, 149 insertions(+), 62 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/55763dfd/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
index 34f429a..f07973a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
@@ -26,6 +26,7 @@ import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.DFSUtilClient;
+import org.apache.hadoop.io.erasurecode.ECSchema;
 
 /** Interface that represents the over the wire information for a file.
  */
@@ -48,6 +49,8 @@ public class HdfsFileStatus {
 
   private final FileEncryptionInfo feInfo;
 
+  private final ECSchema schema;
+  
   // Used by dir, not including dot and dotdot. Always zero for a regular file.
   private final int childrenNum;
   private final byte storagePolicy;
@@ -73,7 +76,7 @@ public class HdfsFileStatus {
   long blocksize, long modification_time, long access_time,
   FsPermission permission, String owner, String group, byte[] symlink,
   byte[] path, long fileId, int childrenNum, FileEncryptionInfo feInfo,
-  byte storagePolicy) {
+  byte storagePolicy, ECSchema schema) {
 this.length = length;
 this.isdir = isdir;
 this.block_replication = (short)block_replication;
@@ -93,6 +96,7 @@ public class HdfsFileStatus {
 this.childrenNum = childrenNum;
 this.feInfo = feInfo;
 this.storagePolicy = storagePolicy;
+this.schema = schema;
   }
 
   /**
@@ -250,6 +254,10 @@ public class HdfsFileStatus {
 return feInfo;
   }
 
+  public ECSchema getECSchema() {
+return schema;
+  }
+
   public final int getChildrenNum() {
 return childrenNum;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/55763dfd/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
index ac19d44..813ea26 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
+++ 

[06/50] hadoop git commit: HDFS-8024. Erasure Coding: ECworker frame, basics, bootstraping and configuration. (Contributed by Uma Maheswara Rao G)

2015-05-18 Thread jing9
HDFS-8024. Erasure Coding: ECworker frame, basics, bootstraping and 
configuration. (Contributed by Uma Maheswara Rao G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/113f920e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/113f920e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/113f920e

Branch: refs/heads/HDFS-7285
Commit: 113f920ecf0c64f329cc702c48c12b82636595ee
Parents: cdd89c1
Author: Uma Maheswara Rao G umamah...@apache.org
Authored: Wed Apr 22 19:30:14 2015 +0530
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:05 2015 -0700

--
 .../erasurecode/coder/AbstractErasureCoder.java |  2 +-
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  7 ++
 .../hdfs/server/datanode/BPOfferService.java|  6 ++
 .../hadoop/hdfs/server/datanode/DataNode.java   | 10 +++
 .../erasurecode/ErasureCodingWorker.java| 83 
 .../src/main/proto/DatanodeProtocol.proto   |  2 +
 7 files changed, 112 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/113f920e/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
index e5bf11a..7403e35 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
@@ -66,7 +66,7 @@ public abstract class AbstractErasureCoder
* @param isEncoder
* @return raw coder
*/
-  protected static RawErasureCoder createRawCoder(Configuration conf,
+  public static RawErasureCoder createRawCoder(Configuration conf,
   String rawCoderFactoryKey, boolean isEncoder) {
 
 if (conf == null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/113f920e/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 3d86f05..1acde41 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -113,3 +113,6 @@
 
 HDFS-8212. DistributedFileSystem.createErasureCodingZone should pass schema
 in FileSystemLinkResolver. (szetszwo via Zhe Zhang)
+
+HDFS-8024. Erasure Coding: ECworker frame, basics, bootstraping and 
configuration.
+(umamahesh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/113f920e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
index c127b5f..68cfe7f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
@@ -973,6 +973,8 @@ public class PBHelper {
   return REG_CMD;
 case BlockIdCommand:
   return PBHelper.convert(proto.getBlkIdCmd());
+case BlockECRecoveryCommand:
+  return PBHelper.convert(proto.getBlkECRecoveryCmd());
 default:
   return null;
 }
@@ -1123,6 +1125,11 @@ public class PBHelper {
   builder.setCmdType(DatanodeCommandProto.Type.BlockIdCommand).
 setBlkIdCmd(PBHelper.convert((BlockIdCommand) datanodeCommand));
   break;
+case DatanodeProtocol.DNA_ERASURE_CODING_RECOVERY:
+  builder.setCmdType(DatanodeCommandProto.Type.BlockECRecoveryCommand)
+  .setBlkECRecoveryCmd(
+  convert((BlockECRecoveryCommand) datanodeCommand));
+  break;
 case DatanodeProtocol.DNA_UNKNOWN: //Not expected
 default:
   builder.setCmdType(DatanodeCommandProto.Type.NullDatanodeCommand);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/113f920e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
--
diff --git 

[03/50] hadoop git commit: HDFS-8190. StripedBlockUtil.getInternalBlockLength may have overflow error.

2015-05-18 Thread jing9
HDFS-8190. StripedBlockUtil.getInternalBlockLength may have overflow error.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1c69e45c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1c69e45c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1c69e45c

Branch: refs/heads/HDFS-7285
Commit: 1c69e45c51535d03bda66964cdce91cc831032a7
Parents: 70fe4de
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Mon Apr 20 17:42:02 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:05 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../hadoop/hdfs/util/StripedBlockUtil.java  |  61 ---
 .../hadoop/hdfs/TestDFSStripedOutputStream.java | 178 +++
 3 files changed, 100 insertions(+), 142 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1c69e45c/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index c8dbf08..8f28285 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -104,3 +104,6 @@
 
 HDFS-8181. createErasureCodingZone sets retryCache state as false always
 (Uma Maheswara Rao G via vinayakumarb)
+
+HDFS-8190. StripedBlockUtil.getInternalBlockLength may have overflow error.
+(szetszwo)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1c69e45c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
index 2368021..d622d4d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
@@ -25,6 +25,8 @@ import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
 
+import com.google.common.base.Preconditions;
+
 /**
  * Utility class for analyzing striped block groups
  */
@@ -81,46 +83,43 @@ public class StripedBlockUtil {
   /**
* Get the size of an internal block at the given index of a block group
*
-   * @param numBytesInGroup Size of the block group only counting data blocks
+   * @param dataSize Size of the block group only counting data blocks
* @param cellSize The size of a striping cell
-   * @param dataBlkNum The number of data blocks
-   * @param idxInGroup The logical index in the striped block group
+   * @param numDataBlocks The number of data blocks
+   * @param i The logical index in the striped block group
* @return The size of the internal block at the specified index
*/
-  public static long getInternalBlockLength(long numBytesInGroup,
-  int cellSize, int dataBlkNum, int idxInGroup) {
+  public static long getInternalBlockLength(long dataSize,
+  int cellSize, int numDataBlocks, int i) {
+Preconditions.checkArgument(dataSize >= 0);
+Preconditions.checkArgument(cellSize > 0);
+Preconditions.checkArgument(numDataBlocks > 0);
+Preconditions.checkArgument(i >= 0);
 // Size of each stripe (only counting data blocks)
-final long numBytesPerStripe = cellSize * dataBlkNum;
-assert numBytesPerStripe > 0:
-"getInternalBlockLength should only be called on valid striped blocks";
+final int stripeSize = cellSize * numDataBlocks;
 // If block group ends at stripe boundary, each internal block has an equal
 // share of the group
-if (numBytesInGroup % numBytesPerStripe == 0) {
-  return numBytesInGroup / dataBlkNum;
+final int lastStripeDataLen = (int)(dataSize % stripeSize);
+if (lastStripeDataLen == 0) {
+  return dataSize / numDataBlocks;
 }
 
-int numStripes = (int) ((numBytesInGroup - 1) / numBytesPerStripe + 1);
-assert numStripes >= 1 : "There should be at least 1 stripe";
-
-// All stripes but the last one are full stripes. The block should at least
-// contain (numStripes - 1) full cells.
-long blkSize = (numStripes - 1) * cellSize;
-
-long lastStripeLen = numBytesInGroup % numBytesPerStripe;
-// Size of parity cells should equal the size of the first cell, if it
-// is not full.
-long lastParityCellLen = Math.min(cellSize, lastStripeLen);
-
-if (idxInGroup >= dataBlkNum) {
-  // for 
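
The hunk is truncated here, but the rule the rewrite implements can be restated compactly. A sketch only, under the assumption stated in the old comments (parity cells are sized like the first, largest cell of the last stripe); the demo class is hypothetical and plain arithmetic stands in for the Guava Preconditions:

public class InternalBlockLengthDemo {
  static long internalBlockLength(long dataSize, int cellSize,
      int numDataBlocks, int i) {
    final int stripeSize = cellSize * numDataBlocks;
    final int lastStripeDataLen = (int) (dataSize % stripeSize);
    if (lastStripeDataLen == 0) {
      // Group ends on a stripe boundary: every internal block is equal.
      return dataSize / numDataBlocks;
    }
    final long fullStripesLen = dataSize / stripeSize * cellSize;
    if (i < numDataBlocks) {
      // Data block i gets whatever of the last stripe lands in its cell.
      return fullStripesLen + Math.min(cellSize,
          Math.max(0, lastStripeDataLen - i * cellSize));
    }
    // Parity block: one more cell, sized like the last stripe's first cell.
    return fullStripesLen + Math.min(cellSize, lastStripeDataLen);
  }

  public static void main(String[] args) {
    // 3 data blocks, 4096-byte cells, 10000 bytes of data: data blocks get
    // 4096, 4096, 1808 (sums to 10000); a parity block gets 4096.
    for (int i = 0; i < 4; i++) {
      System.out.println(i + ": " + internalBlockLength(10000, 4096, 3, i));
    }
  }
}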

[29/50] hadoop git commit: HDFS-7936. Erasure coding: resolving conflicts in the branch when merging trunk changes (this commit is for HDFS-8327 and HDFS-8357). Contributed by Zhe Zhang.

2015-05-18 Thread jing9
HDFS-7936. Erasure coding: resolving conflicts in the branch when merging trunk 
changes (this commit is for HDFS-8327 and HDFS-8357). Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e2c1d180
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e2c1d180
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e2c1d180

Branch: refs/heads/HDFS-7285
Commit: e2c1d180611833060849ffe4c7ac5b1025724c2e
Parents: 11e5d11
Author: Zhe Zhang z...@apache.org
Authored: Mon May 11 12:22:12 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:09 2015 -0700

--
 .../hdfs/server/blockmanagement/BlockInfo.java  | 12 +--
 .../blockmanagement/BlockInfoContiguous.java| 38 
 .../server/blockmanagement/BlockManager.java|  4 +--
 .../erasurecode/ErasureCodingWorker.java|  3 +-
 .../hadoop/hdfs/server/namenode/INodeFile.java  | 10 ++
 .../server/namenode/TestStripedINodeFile.java   |  8 ++---
 .../namenode/TestTruncateQuotaUpdate.java   |  3 +-
 7 files changed, 23 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2c1d180/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index aebfbb1..61068b9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -88,13 +88,21 @@ public abstract class BlockInfo extends Block
   BlockInfo getPrevious(int index) {
assert this.triplets != null : "BlockInfo is not initialized";
assert index >= 0 && index*3+1 < triplets.length : "Index is out of bound";
-return (BlockInfo) triplets[index*3+1];
+BlockInfo info = (BlockInfo)triplets[index*3+1];
+assert info == null ||
+info.getClass().getName().startsWith(BlockInfo.class.getName()) :
+"BlockInfo is expected at " + index*3;
+return info;
   }
 
   BlockInfo getNext(int index) {
assert this.triplets != null : "BlockInfo is not initialized";
assert index >= 0 && index*3+2 < triplets.length : "Index is out of bound";
-return (BlockInfo) triplets[index*3+2];
+BlockInfo info = (BlockInfo)triplets[index*3+2];
+assert info == null || info.getClass().getName().startsWith(
+BlockInfo.class.getName()) :
+"BlockInfo is expected at " + index*3;
+return info;
   }
 
   void setStorageInfo(int index, DatanodeStorageInfo storage) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2c1d180/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
index d3051a3..eeab076 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
@@ -47,18 +47,6 @@ public class BlockInfoContiguous extends BlockInfo {
 this.setBlockCollection(from.getBlockCollection());
   }
 
-  public BlockCollection getBlockCollection() {
-return bc;
-  }
-
-  public void setBlockCollection(BlockCollection bc) {
-this.bc = bc;
-  }
-
-  public boolean isDeleted() {
-return (bc == null);
-  }
-
   public DatanodeDescriptor getDatanode(int index) {
 DatanodeStorageInfo storage = getStorageInfo(index);
 return storage == null ? null : storage.getDatanodeDescriptor();
@@ -70,32 +58,6 @@ public class BlockInfoContiguous extends BlockInfo {
 return (DatanodeStorageInfo)triplets[index*3];
   }
 
-  private BlockInfoContiguous getPrevious(int index) {
-assert this.triplets != null : "BlockInfo is not initialized";
-assert index >= 0 && index*3+1 < triplets.length : "Index is out of bound";
-BlockInfoContiguous info = (BlockInfoContiguous)triplets[index*3+1];
-assert info == null ||
-info.getClass().getName().startsWith(BlockInfoContiguous.class.getName()) :
-  "BlockInfo is expected at " + index*3;
-return info;
-  }
-
-  BlockInfoContiguous 

[10/50] hadoop git commit: HDFS-8189. ClientProtocol#createErasureCodingZone API was wrongly annotated as Idempotent (Contributed by Vinayakumar B)

2015-05-18 Thread jing9
HDFS-8189. ClientProtocol#createErasureCodingZone API was wrongly annotated as 
Idempotent (Contributed by Vinayakumar B)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/07516909
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/07516909
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/07516909

Branch: refs/heads/HDFS-7285
Commit: 07516909aa72d138e9ebe20a4c9dda9cc9ab6a13
Parents: c6bb2f2
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue Apr 28 14:24:17 2015 +0530
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:06 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  5 -
 .../apache/hadoop/hdfs/protocol/ClientProtocol.java | 16 
 2 files changed, 12 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/07516909/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index c28473b..6c5d7ce 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -136,4 +136,7 @@
 striped layout (Zhe Zhang)
 
 HDFS-8230. Erasure Coding: Ignore 
DatanodeProtocol#DNA_ERASURE_CODING_RECOVERY 
-commands from standbynode if any (vinayakumarb)
\ No newline at end of file
+commands from standbynode if any (vinayakumarb)
+
+HDFS-8189. ClientProtocol#createErasureCodingZone API was wrongly annotated
+as Idempotent (vinayakumarb)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/07516909/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
index bba7697..76e2d12 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
@@ -1364,14 +1364,6 @@ public interface ClientProtocol {
   long prevId) throws IOException;
 
   /**
-   * Create an erasure coding zone with specified schema, if any, otherwise
-   * default
-   */
-  @Idempotent
-  public void createErasureCodingZone(String src, ECSchema schema)
-  throws IOException;
-
-  /**
* Set xattr of a file or directory.
* The name must be prefixed with the namespace followed by .. For example,
* user.attr.
@@ -1467,6 +1459,14 @@ public interface ClientProtocol {
   public EventBatchList getEditsFromTxid(long txid) throws IOException;
 
   /**
+   * Create an erasure coding zone with specified schema, if any, otherwise
+   * default
+   */
+  @AtMostOnce
+  public void createErasureCodingZone(String src, ECSchema schema)
+  throws IOException;
+
+  /**
* Gets the ECInfo for the specified file/directory
* 
* @param src



[20/50] hadoop git commit: HDFS-8282. Erasure coding: move striped reading logic to StripedBlockUtil. Contributed by Zhe Zhang.

2015-05-18 Thread jing9
HDFS-8282. Erasure coding: move striped reading logic to StripedBlockUtil. 
Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/40d5a85d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/40d5a85d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/40d5a85d

Branch: refs/heads/HDFS-7285
Commit: 40d5a85da0c664c1b5caa015544a618445744c8b
Parents: bed8e8c
Author: Zhe Zhang z...@apache.org
Authored: Wed Apr 29 23:49:52 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:07 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../hadoop/hdfs/DFSStripedInputStream.java  | 111 +---
 .../hadoop/hdfs/util/StripedBlockUtil.java  | 174 +++
 .../hadoop/hdfs/TestPlanReadPortions.java   |  11 +-
 4 files changed, 186 insertions(+), 113 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/40d5a85d/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 6a9bdee..ca60487 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -146,3 +146,6 @@
 
 HDFS-8272. Erasure Coding: simplify the retry logic in 
DFSStripedInputStream 
 (stateful read). (Jing Zhao via Zhe Zhang)
+
+HDFS-8282. Erasure coding: move striped reading logic to StripedBlockUtil.
+(Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/40d5a85d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index 3da7306..0dc98fd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -17,12 +17,14 @@
  */
 package org.apache.hadoop.hdfs;
 
-import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.fs.ChecksumException;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.*;
 import 
org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException;
 import org.apache.hadoop.hdfs.util.StripedBlockUtil;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.ReadPortion;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.planReadPortions;
+
 import org.apache.hadoop.net.NetUtils;
 import org.apache.htrace.Span;
 import org.apache.htrace.Trace;
@@ -31,8 +33,6 @@ import org.apache.htrace.TraceScope;
 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.nio.ByteBuffer;
-import java.util.ArrayList;
-import java.util.List;
 import java.util.Set;
 import java.util.Map;
 import java.util.HashMap;
@@ -69,59 +69,6 @@ import java.util.concurrent.Future;
  *   3. pread with decode support: TODO: will be supported after HDFS-7678
  */
 public class DFSStripedInputStream extends DFSInputStream {
-  /**
-   * This method plans the read portion from each block in the stripe
-   * @param dataBlkNum The number of data blocks in the striping group
-   * @param cellSize The size of each striping cell
-   * @param startInBlk Starting offset in the striped block
-   * @param len Length of the read request
-   * @param bufOffset  Initial offset in the result buffer
-   * @return array of {@link ReadPortion}, each representing the portion of I/O
-   * for an individual block in the group
-   */
-  @VisibleForTesting
-  static ReadPortion[] planReadPortions(final int dataBlkNum,
-  final int cellSize, final long startInBlk, final int len, int bufOffset) 
{
-ReadPortion[] results = new ReadPortion[dataBlkNum];
-for (int i = 0; i < dataBlkNum; i++) {
-  results[i] = new ReadPortion();
-}
-
-// cellIdxInBlk is the index of the cell in the block
-// E.g., cell_3 is the 2nd cell in blk_0
-int cellIdxInBlk = (int) (startInBlk / (cellSize * dataBlkNum));
-
-// blkIdxInGroup is the index of the block in the striped block group
-// E.g., blk_2 is the 3rd block in the group
-final int blkIdxInGroup = (int) (startInBlk / cellSize % dataBlkNum);
-results[blkIdxInGroup].startOffsetInBlock = cellSize * cellIdxInBlk +
-startInBlk % cellSize;
-boolean 
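
The index arithmetic above is the heart of the moved logic. A worked example with small illustrative numbers (not the real 64KB cells; the demo class is not from the patch):

public class PlanReadPortionsDemo {
  public static void main(String[] args) {
    int dataBlkNum = 3, cellSize = 64;
    long startInBlk = 200;  // offset into the striped block group
    // Cells rotate across blocks, so cell 3 is the 2nd cell of blk_0.
    int cellIdxInBlk = (int) (startInBlk / (cellSize * dataBlkNum));  // 1
    int blkIdxInGroup = (int) (startInBlk / cellSize % dataBlkNum);   // 0
    long startOffsetInBlock = cellSize * cellIdxInBlk
        + startInBlk % cellSize;                                      // 72
    System.out.println("blk_" + blkIdxInGroup + " @ " + startOffsetInBlock);
  }
}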

[18/50] hadoop git commit: HDFS-8316. Erasure coding: refactor EC constants to be consistent with HDFS-8249. Contributed by Zhe Zhang.

2015-05-18 Thread jing9
HDFS-8316. Erasure coding: refactor EC constants to be consistent with 
HDFS-8249. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f8a39c94
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f8a39c94
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f8a39c94

Branch: refs/heads/HDFS-7285
Commit: f8a39c94517725f5a235a5cca8374b387efe8272
Parents: 4a9cc36
Author: Jing Zhao ji...@apache.org
Authored: Mon May 4 11:24:35 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:07 2015 -0700

--
 .../org/apache/hadoop/hdfs/protocol/HdfsConstants.java   | 11 +++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt |  3 +++
 .../org/apache/hadoop/hdfs/DFSStripedOutputStream.java   |  2 +-
 .../hdfs/server/blockmanagement/BlockIdManager.java  |  4 ++--
 .../blockmanagement/SequentialBlockGroupIdGenerator.java |  4 ++--
 .../hadoop/hdfs/server/common/HdfsServerConstants.java   |  5 -
 .../hdfs/server/namenode/TestAddStripedBlocks.java   |  4 ++--
 .../hdfs/server/namenode/TestStripedINodeFile.java   |  6 +++---
 8 files changed, 28 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f8a39c94/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 58c7ea1..32ca81c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -75,6 +75,17 @@ public final class HdfsConstants {
   public static final String CLIENT_NAMENODE_PROTOCOL_NAME =
"org.apache.hadoop.hdfs.protocol.ClientProtocol";
 
+  /*
+   * These values correspond to the values used by the system default erasure
+   * coding schema.
+   * TODO: to be removed once all places use schema.
+   */
+
+  public static final byte NUM_DATA_BLOCKS = 6;
+  public static final byte NUM_PARITY_BLOCKS = 3;
+  // The chunk size for striped block which is used by erasure coding
+  public static final int BLOCK_STRIPED_CELL_SIZE = 256 * 1024;
+
   // SafeMode actions
   public enum SafeModeAction {
 SAFEMODE_LEAVE, SAFEMODE_ENTER, SAFEMODE_GET

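A quick sketch of the geometry these defaults imply, derived only from the constants above (the demo class is illustrative):

public class StripeGeometryDemo {
  public static void main(String[] args) {
    int cell = 256 * 1024;     // BLOCK_STRIPED_CELL_SIZE
    int data = 6, parity = 3;  // NUM_DATA_BLOCKS, NUM_PARITY_BLOCKS
    System.out.println("data bytes per stripe: " + cell * data);           // 1572864
    System.out.println("raw bytes per stripe:  " + cell * (data + parity)); // 2359296
    System.out.println("storage overhead:      " + (double) parity / data); // 0.5
  }
}
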
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f8a39c94/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 145494f..e30b2ed 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -158,3 +158,6 @@
 
 HDFS-7949. WebImageViewer need support file size calculation with striped 
 blocks. (Rakesh R via Zhe Zhang)
+
+HDFS-8316. Erasure coding: refactor EC constants to be consistent with 
HDFS-8249.
+(Zhe Zhang via jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f8a39c94/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
index 5e2a534..71cdbb9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
@@ -419,7 +419,7 @@ public class DFSStripedOutputStream extends DFSOutputStream 
{
   @Override
   protected synchronized void closeImpl() throws IOException {
 if (isClosed()) {
-  getLeadingStreamer().getLastException().check();
+  getLeadingStreamer().getLastException().check(true);
   return;
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f8a39c94/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
index 

[19/50] hadoop git commit: HDFS-8183. Erasure Coding: Improve DFSStripedOutputStream closing of datastreamer threads. Contributed by Rakesh R.

2015-05-18 Thread jing9
HDFS-8183. Erasure Coding: Improve DFSStripedOutputStream closing of 
datastreamer threads. Contributed by Rakesh R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/63de26f9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/63de26f9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/63de26f9

Branch: refs/heads/HDFS-7285
Commit: 63de26f925f748d0866335a36721fd4e8bb8776e
Parents: 40d5a85
Author: Zhe Zhang z...@apache.org
Authored: Thu Apr 30 00:13:32 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:07 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +++
 .../org/apache/hadoop/hdfs/DFSStripedOutputStream.java  | 12 ++--
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/63de26f9/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index ca60487..3c75152 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -149,3 +149,6 @@
 
 HDFS-8282. Erasure coding: move striped reading logic to StripedBlockUtil.
 (Zhe Zhang)
+
+HDFS-8183. Erasure Coding: Improve DFSStripedOutputStream closing of 
+datastreamer threads. (Rakesh R via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63de26f9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
index c930187..5e2a534 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
@@ -331,18 +331,26 @@ public class DFSStripedOutputStream extends 
DFSOutputStream {
   // interrupt datastreamer if force is true
   @Override
   protected void closeThreads(boolean force) throws IOException {
+int index = 0;
+boolean exceptionOccurred = false;
 for (StripedDataStreamer streamer : streamers) {
   try {
 streamer.close(force);
 streamer.join();
 streamer.closeSocket();
-  } catch (InterruptedException e) {
-throw new IOException("Failed to shutdown streamer");
+  } catch (InterruptedException | IOException e) {
+DFSClient.LOG.error("Failed to shutdown streamer: name="
++ streamer.getName() + ", index=" + index + ", file=" + src, e);
+exceptionOccurred = true;
   } finally {
 streamer.setSocketToNull();
 setClosed();
+index++;
   }
 }
+if (exceptionOccurred) {
+  throw new IOException("Failed to shutdown streamer");
+}
   }
 
   /**
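
The shape of the fix: keep closing the remaining streamers even when one fails, then surface a single IOException at the end instead of aborting on the first error. A generic sketch of that pattern (hypothetical helper, not from the patch):

import java.io.Closeable;
import java.io.IOException;
import java.util.List;

public class CloseAllDemo {
  static void closeAll(List<? extends Closeable> resources)
      throws IOException {
    boolean failed = false;
    for (Closeable c : resources) {
      try {
        c.close();
      } catch (IOException e) {
        failed = true;  // record and continue; don't skip the rest
      }
    }
    if (failed) {
      throw new IOException("Failed to close one or more resources");
    }
  }
}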



[12/50] hadoop git commit: HDFS-8223. Should calculate checksum for parity blocks in DFSStripedOutputStream. Contributed by Yi Liu.

2015-05-18 Thread jing9
HDFS-8223. Should calculate checksum for parity blocks in 
DFSStripedOutputStream. Contributed by Yi Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cb8dd8a2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cb8dd8a2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cb8dd8a2

Branch: refs/heads/HDFS-7285
Commit: cb8dd8a2879dd1804625327fc7222e30ece9b968
Parents: 55e2657
Author: Jing Zhao ji...@apache.org
Authored: Thu Apr 23 15:48:21 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:06 2015 -0700

--
 .../main/java/org/apache/hadoop/fs/FSOutputSummer.java|  4 
 hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt  |  3 +++
 .../org/apache/hadoop/hdfs/DFSStripedOutputStream.java| 10 ++
 3 files changed, 17 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb8dd8a2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
index bdc5585..a8a7494 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java
@@ -196,6 +196,10 @@ abstract public class FSOutputSummer extends OutputStream {
 return sum.getChecksumSize();
   }
 
+  protected DataChecksum getDataChecksum() {
+return sum;
+  }
+
   protected TraceScope createWriteTraceScope() {
 return NullScope.INSTANCE;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb8dd8a2/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 48791b1..9357e23 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -125,3 +125,6 @@
 
 HDFS-8233. Fix DFSStripedOutputStream#getCurrentBlockGroupBytes when the 
last
 stripe is at the block group boundary. (jing9)
+
+HDFS-8223. Should calculate checksum for parity blocks in 
DFSStripedOutputStream.
+(Yi Liu via jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb8dd8a2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
index 245dfc1..6842267 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
@@ -62,6 +62,8 @@ public class DFSStripedOutputStream extends DFSOutputStream {
*/
   private final ECInfo ecInfo;
   private final int cellSize;
+  // checksum buffer, we only need to calculate checksum for parity blocks
+  private byte[] checksumBuf;
   private ByteBuffer[] cellBuffers;
 
   private final short numAllBlocks;
@@ -99,6 +101,7 @@ public class DFSStripedOutputStream extends DFSOutputStream {
 
 checkConfiguration();
 
+checksumBuf = new byte[getChecksumSize() * (cellSize / bytesPerChecksum)];
 cellBuffers = new ByteBuffer[numAllBlocks];
 List<BlockingQueue<LocatedBlock>> stripeBlocks = new ArrayList<>();
 
@@ -179,6 +182,10 @@ public class DFSStripedOutputStream extends 
DFSOutputStream {
   private List<DFSPacket> generatePackets(ByteBuffer byteBuffer)
   throws IOException{
 List<DFSPacket> packets = new ArrayList<>();
+assert byteBuffer.hasArray();
+getDataChecksum().calculateChunkedSums(byteBuffer.array(), 0,
+byteBuffer.remaining(), checksumBuf, 0);
+int ckOff = 0;
 while (byteBuffer.remaining() > 0) {
   DFSPacket p = createPacket(packetSize, chunksPerPacket,
   streamer.getBytesCurBlock(),
@@ -186,6 +193,9 @@ public class DFSStripedOutputStream extends DFSOutputStream 
{
   int maxBytesToPacket = p.getMaxChunks() * bytesPerChecksum;
   int toWrite = byteBuffer.remaining() > maxBytesToPacket ?
   maxBytesToPacket: byteBuffer.remaining();
+  int ckLen = ((toWrite - 1) / bytesPerChecksum + 1) * getChecksumSize();
+  p.writeChecksum(checksumBuf, ckOff, ckLen);
+  ckOff += ckLen;
   p.writeData(byteBuffer, 
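
The checksum length per packet above is a ceiling division: toWrite bytes
span ceil(toWrite / bytesPerChecksum) chunks, and each chunk contributes
getChecksumSize() bytes of checksum. A stand-alone illustration of that
arithmetic (the concrete numbers are example values only):

// Illustrates the ckLen computation in generatePackets above.
public class ChecksumLenDemo {
  public static void main(String[] args) {
    int bytesPerChecksum = 512;  // a common dfs.bytes-per-checksum value
    int checksumSize = 4;        // e.g. CRC32C: 4 bytes per chunk
    int toWrite = 1300;          // payload bytes going into one packet

    int chunks = (toWrite - 1) / bytesPerChecksum + 1;  // ceil -> 3
    int ckLen = chunks * checksumSize;                  // 12 checksum bytes
    System.out.println(chunks + " chunks, " + ckLen + " checksum bytes");
  }
}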

[16/50] hadoop git commit: HDFS-8308. Erasure Coding: NameNode may get blocked in waitForLoadingFSImage() when loading editlog. Contributed by Jing Zhao.

2015-05-18 Thread jing9
HDFS-8308. Erasure Coding: NameNode may get blocked in waitForLoadingFSImage() 
when loading editlog. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/77e1ad78
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/77e1ad78
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/77e1ad78

Branch: refs/heads/HDFS-7285
Commit: 77e1ad7817417576ec03edc5dfa4a94f1877fc7b
Parents: 63de26f
Author: Jing Zhao ji...@apache.org
Authored: Thu Apr 30 19:42:29 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:07 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +
 .../namenode/ErasureCodingZoneManager.java  |  3 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  4 +-
 .../org/apache/hadoop/hdfs/DFSTestUtil.java | 12 
 .../hadoop/hdfs/TestErasureCodingZones.java |  6 +-
 .../server/namenode/TestAddStripedBlocks.java   | 61 ++--
 6 files changed, 52 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/77e1ad78/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 3c75152..596bbcf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -152,3 +152,6 @@
 
 HDFS-8183. Erasure Coding: Improve DFSStripedOutputStream closing of 
 datastreamer threads. (Rakesh R via Zhe Zhang)
+
+HDFS-8308. Erasure Coding: NameNode may get blocked in 
waitForLoadingFSImage()
+when loading editlog. (jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/77e1ad78/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
index 8cda289..14d4e29 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingZoneManager.java
@@ -79,7 +79,8 @@ public class ErasureCodingZoneManager {
   for (XAttr xAttr : xAttrs) {
 if (XATTR_ERASURECODING_ZONE.equals(XAttrHelper.getPrefixName(xAttr))) 
{
   String schemaName = new String(xAttr.getValue());
-  ECSchema schema = dir.getFSNamesystem().getECSchema(schemaName);
+  ECSchema schema = dir.getFSNamesystem().getSchemaManager()
+  .getSchema(schemaName);
   return new ECZoneInfo(inode.getFullPathName(), schema);
 }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/77e1ad78/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 13eee0d..075fc6c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -7729,9 +7729,9 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 
   /**
* Create an erasure coding zone on directory src.
-   * @param schema  ECSchema for the erasure coding zone
-   * @param src the path of a directory which will be the root of the
+   * @param srcArg  the path of a directory which will be the root of the
*erasure coding zone. The directory must be empty.
+   * @param schema  ECSchema for the erasure coding zone
*
* @throws AccessControlException  if the caller is not the superuser.
* @throws UnresolvedLinkException if the path can't be resolved.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/77e1ad78/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
index 

[37/50] hadoop git commit: HDFS-8195. Erasure coding: Fix file quota change when we complete/commit the striped blocks. Contributed by Takuya Fukudome.

2015-05-18 Thread jing9
HDFS-8195. Erasure coding: Fix file quota change when we complete/commit the 
striped blocks. Contributed by Takuya Fukudome.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fc724984
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fc724984
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fc724984

Branch: refs/heads/HDFS-7285
Commit: fc7249844d0bbab22695cd0f819b07695069fe2a
Parents: a0b6de3
Author: Zhe Zhang z...@apache.org
Authored: Tue May 12 23:10:25 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:10 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../hdfs/server/namenode/FSDirectory.java   |   2 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  25 +++-
 .../namenode/TestQuotaWithStripedBlocks.java| 125 +++
 4 files changed, 151 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fc724984/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 0a2bb9e..0945d72 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -206,3 +206,6 @@
 handled properly (Rakesh R via zhz)
 
 HDFS-8363. Erasure Coding: DFSStripedInputStream#seekToNewSource. (yliu)
+
+HDFS-8195. Erasure coding: Fix file quota change when we complete/commit 
+the striped blocks. (Takuya Fukudome via zhz)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fc724984/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 3f619ff..f879fb9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -619,7 +619,7 @@ public class FSDirectory implements Closeable {
 final INodeFile fileINode = iip.getLastINode().asFile();
 EnumCounters<StorageType> typeSpaceDeltas =
   getStorageTypeDeltas(fileINode.getStoragePolicyID(), ssDelta,
-  replication, replication);;
+  replication, replication);
 updateCount(iip, iip.length() - 1,
   new QuotaCounts.Builder().nameSpace(nsDelta).storageSpace(ssDelta * 
replication).
   typeSpaces(typeSpaceDeltas).build(),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fc724984/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 4d4a56f..7d60a61 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -3856,11 +3856,30 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 }
 
 // Adjust disk space consumption if required
-// TODO: support EC files
-final long diff = fileINode.getPreferredBlockSize() - 
commitBlock.getNumBytes();
+final long diff;
+final short replicationFactor;
+if (fileINode.isStriped()) {
+  final ECSchema ecSchema = dir.getECSchema(iip);
+  final short numDataUnits = (short) ecSchema.getNumDataUnits();
+  final short numParityUnits = (short) ecSchema.getNumParityUnits();
+
+  final long numBlocks = numDataUnits + numParityUnits;
+  final long fullBlockGroupSize =
+  fileINode.getPreferredBlockSize() * numBlocks;
+
+  final BlockInfoStriped striped = new BlockInfoStriped(commitBlock,
+  numDataUnits, numParityUnits);
+  final long actualBlockGroupSize = striped.spaceConsumed();
+
+  diff = fullBlockGroupSize - actualBlockGroupSize;
+  replicationFactor = (short) 1;
+} else {
+  diff = fileINode.getPreferredBlockSize() - commitBlock.getNumBytes();
+  replicationFactor = fileINode.getFileReplication();
+}
 if (diff > 0) {
   try {
-dir.updateSpaceConsumed(iip, 0, -diff, 
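
For a striped file the adjustment above releases the difference between the
reserved full block group and the bytes actually committed, counted once
(replicationFactor = 1) because spaceConsumed() already includes every
internal block. A worked example, assuming an RS-(6,3) schema and
illustrative sizes (all numbers here are made up for the demo):

// Worked example of the striped quota adjustment; values are illustrative.
public class StripedQuotaDemo {
  public static void main(String[] args) {
    long preferredBlockSize = 128L << 20;       // 128 MB per internal block
    int numDataUnits = 6, numParityUnits = 3;   // assumed RS-(6,3) schema
    long numBlocks = numDataUnits + numParityUnits;

    long fullBlockGroupSize = preferredBlockSize * numBlocks;  // 1152 MB
    long actualBlockGroupSize = 900L << 20;     // pretend spaceConsumed()

    long diff = fullBlockGroupSize - actualBlockGroupSize;     // 252 MB
    short replicationFactor = 1;                // blocks already all counted

    System.out.println("release " + ((diff * replicationFactor) >> 20) + " MB");
  }
}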

[30/50] hadoop git commit: HDFS-8334. Erasure coding: rename DFSStripedInputStream related test classes. Contributed by Zhe Zhang.

2015-05-18 Thread jing9
HDFS-8334. Erasure coding: rename DFSStripedInputStream related test classes. 
Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ab2b0fb1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ab2b0fb1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ab2b0fb1

Branch: refs/heads/HDFS-7285
Commit: ab2b0fb10f21189091ac722b262c9fea45c4450f
Parents: 1efb8ed
Author: Zhe Zhang z...@apache.org
Authored: Wed May 6 15:34:37 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:09 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   5 +
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 365 ---
 .../apache/hadoop/hdfs/TestReadStripedFile.java | 218 ---
 .../hadoop/hdfs/TestWriteReadStripedFile.java   | 261 +
 4 files changed, 427 insertions(+), 422 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab2b0fb1/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 0d2d448..8729f8a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -178,3 +178,8 @@
 
 HDFS-7348. Erasure Coding: DataNode reconstruct striped blocks. 
 (Yi Liu via Zhe Zhang)
+
+HADOOP-11921. Enhance tests for erasure coders. (Kai Zheng)
+
+HDFS-8334. Erasure coding: rename DFSStripedInputStream related test 
+classes. (Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab2b0fb1/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
index 11cdf7b..a1f704d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
@@ -17,245 +17,202 @@
  */
 package org.apache.hadoop.hdfs;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FileStatus;
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT;
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ECInfo;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
-
-import org.junit.AfterClass;
-import org.junit.Assert;
-import org.junit.BeforeClass;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset;
+import org.apache.hadoop.hdfs.server.namenode.ECSchemaManager;
+import org.apache.hadoop.hdfs.util.StripedBlockUtil;
+import org.junit.After;
+import org.junit.Before;
 import org.junit.Test;
 
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.util.Arrays;
+import java.util.List;
 
 public class TestDFSStripedInputStream {
-  private static int dataBlocks = HdfsConstants.NUM_DATA_BLOCKS;
-  private static int parityBlocks = HdfsConstants.NUM_PARITY_BLOCKS;
-
-
-  private static DistributedFileSystem fs;
-  private final static int cellSize = HdfsConstants.BLOCK_STRIPED_CELL_SIZE;
-  private final static int stripesPerBlock = 4;
-  static int blockSize = cellSize * stripesPerBlock;
-  static int numDNs = dataBlocks + parityBlocks + 2;
-
-  private static MiniDFSCluster cluster;
 
-  @BeforeClass
-  public static void setup() throws IOException {
-Configuration conf = new Configuration();
-conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
-cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDNs).build();
-cluster.getFileSystem().getClient().createErasureCodingZone("/", null);
+  public static final Log LOG = 
LogFactory.getLog(TestDFSStripedInputStream.class);
+
+  private MiniDFSCluster cluster;
+  private Configuration conf = new Configuration();
+  

[17/50] hadoop git commit: HDFS-7949. WebImageViewer need support file size calculation with striped blocks. Contributed by Rakesh R.

2015-05-18 Thread jing9
HDFS-7949. WebImageViewer need support file size calculation with striped 
blocks. Contributed by Rakesh R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4a9cc368
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4a9cc368
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4a9cc368

Branch: refs/heads/HDFS-7285
Commit: 4a9cc3685322c101c43b2421a3a9aef618462199
Parents: 77e1ad7
Author: Zhe Zhang z...@apache.org
Authored: Fri May 1 15:59:58 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:07 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../blockmanagement/BlockInfoStriped.java   |  27 +--
 .../tools/offlineImageViewer/FSImageLoader.java |  21 ++-
 .../hadoop/hdfs/util/StripedBlockUtil.java  |  22 +++
 ...TestOfflineImageViewerWithStripedBlocks.java | 166 +++
 5 files changed, 212 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a9cc368/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 596bbcf..145494f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -155,3 +155,6 @@
 
 HDFS-8308. Erasure Coding: NameNode may get blocked in 
waitForLoadingFSImage()
 when loading editlog. (jing9)
+
+HDFS-7949. WebImageViewer need support file size calculation with striped 
+blocks. (Rakesh R via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a9cc368/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
index 23e3153..f0e52e3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
@@ -19,9 +19,7 @@ package org.apache.hadoop.hdfs.server.blockmanagement;
 
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
-
-import java.io.DataOutput;
-import java.io.IOException;
+import org.apache.hadoop.hdfs.util.StripedBlockUtil;
 
 import static 
org.apache.hadoop.hdfs.protocol.HdfsConstants.BLOCK_STRIPED_CELL_SIZE;
 
@@ -203,28 +201,9 @@ public class BlockInfoStriped extends BlockInfo {
 // In case striped blocks, total usage by this striped blocks should
 // be the total of data blocks and parity blocks because
 // `getNumBytes` is the total of actual data block size.
-
-// 0. Calculate the total bytes per stripes <Num Bytes per Stripes>
-long numBytesPerStripe = dataBlockNum * BLOCK_STRIPED_CELL_SIZE;
-if (getNumBytes() % numBytesPerStripe == 0) {
-  return getNumBytes() / dataBlockNum * getTotalBlockNum();
+return StripedBlockUtil.spaceConsumedByStripedBlock(getNumBytes(),
+dataBlockNum, parityBlockNum, BLOCK_STRIPED_CELL_SIZE);
 }
-// 1. Calculate the number of stripes in this block group. <Num Stripes>
-long numStripes = (getNumBytes() - 1) / numBytesPerStripe + 1;
-// 2. Calculate the parity cell length in the last stripe. Note that the
-//size of parity cells should equal the size of the first cell, if it
-//is not full. <Last Stripe Parity Cell Length>
-long lastStripeParityCellLen = Math.min(getNumBytes() % numBytesPerStripe,
-BLOCK_STRIPED_CELL_SIZE);
-// 3. Total consumed space is the total of
-// - The total of the full cells of data blocks and parity blocks.
-// - The remaining of data block which does not make a stripe.
-// - The last parity block cells. These size should be same
-//   to the first cell in this stripe.
-return getTotalBlockNum() * (BLOCK_STRIPED_CELL_SIZE * (numStripes - 1))
-+ getNumBytes() % numBytesPerStripe
-+ lastStripeParityCellLen * parityBlockNum;
-  }
 
   @Override
   public final boolean isStriped() {
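
The removed arithmetic (now centralized in
StripedBlockUtil#spaceConsumedByStripedBlock) reads as: full stripes cost
every cell of every block in the group, and a partial tail stripe costs its
data bytes plus one parity cell per parity block. A self-contained sketch of
that computation, assuming the helper keeps the same logic as the code
deleted above:

// Sketch mirroring the removed spaceConsumed() arithmetic above.
public class StripedSpaceDemo {
  static long spaceConsumed(long numBytes, int dataBlkNum, int parityBlkNum,
      int cellSize) {
    long numBytesPerStripe = (long) dataBlkNum * cellSize;
    int totalBlkNum = dataBlkNum + parityBlkNum;
    if (numBytes % numBytesPerStripe == 0) {
      return numBytes / dataBlkNum * totalBlkNum;       // only full stripes
    }
    long numStripes = (numBytes - 1) / numBytesPerStripe + 1;
    long lastParityCellLen = Math.min(numBytes % numBytesPerStripe, cellSize);
    return totalBlkNum * (cellSize * (numStripes - 1))  // full stripes
        + numBytes % numBytesPerStripe                  // tail data bytes
        + lastParityCellLen * parityBlkNum;             // tail parity cells
  }

  public static void main(String[] args) {
    // e.g. RS-(6,3) with 64 KB cells and 500 KB of data
    System.out.println(spaceConsumed(500 * 1024, 6, 3, 64 * 1024));
  }
}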

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a9cc368/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
--
diff --git 

[27/50] hadoop git commit: HDFS-8137. Send the EC schema to DataNode via EC encoding/recovering command. Contributed by Uma Maheswara Rao G

2015-05-18 Thread jing9
HDFS-8137. Send the EC schema to DataNode via EC encoding/recovering command. 
Contributed by Uma Maheswara Rao G


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6fea26eb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6fea26eb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6fea26eb

Branch: refs/heads/HDFS-7285
Commit: 6fea26eb9e6d23bbc442ffd9f8d118f63975ec45
Parents: 28a46b4
Author: Uma Maheswara Rao G umamah...@apache.org
Authored: Tue May 5 11:22:52 2015 +0530
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:08 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  2 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  6 ++-
 .../server/blockmanagement/BlockManager.java| 22 +-
 .../blockmanagement/DatanodeDescriptor.java | 16 
 .../hdfs/server/namenode/FSNamesystem.java  | 43 +++-
 .../hadoop/hdfs/server/namenode/Namesystem.java | 14 ++-
 .../server/protocol/BlockECRecoveryCommand.java | 14 ++-
 .../src/main/proto/erasurecoding.proto  |  1 +
 .../hadoop/hdfs/protocolPB/TestPBHelper.java| 21 --
 9 files changed, 102 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6fea26eb/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 77272e7..faec023 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -164,3 +164,5 @@
 
 HDFS-8281. Erasure Coding: implement parallel stateful reading for striped 
layout.
 (jing9)
+
+HDFS-8137. Send the EC schema to DataNode via EC encoding/recovering 
command(umamahesh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6fea26eb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
index 3cd3e03..e230232 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
@@ -3191,8 +3191,10 @@ public class PBHelper {
   liveBlkIndices[i] = liveBlockIndicesList.get(i).shortValue();
 }
 
+ECSchema ecSchema = 
convertECSchema(blockEcRecoveryInfoProto.getEcSchema());
+
 return new BlockECRecoveryInfo(block, sourceDnInfos, targetDnInfos,
-targetStorageUuids, convertStorageTypes, liveBlkIndices);
+targetStorageUuids, convertStorageTypes, liveBlkIndices, ecSchema);
   }
 
   public static BlockECRecoveryInfoProto convertBlockECRecoveryInfo(
@@ -3217,6 +3219,8 @@ public class PBHelper {
 short[] liveBlockIndices = blockEcRecoveryInfo.getLiveBlockIndices();
 builder.addAllLiveBlockIndices(convertIntArray(liveBlockIndices));
 
+builder.setEcSchema(convertECSchema(blockEcRecoveryInfo.getECSchema()));
+
 return builder.build();
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6fea26eb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 1e50348..b55c654 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -65,7 +65,6 @@ import 
org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
 import org.apache.hadoop.hdfs.server.blockmanagement.CorruptReplicasMap.Reason;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.AddBlockResult;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.PendingDataNodeMessages.ReportedBlockInfo;
-import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
@@ -83,7 +82,10 @@ import 

[44/50] hadoop git commit: HADOOP-11921. Enhance tests for erasure coders. Contributed by Kai Zheng.

2015-05-18 Thread jing9
HADOOP-11921. Enhance tests for erasure coders. Contributed by Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2390eb6d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2390eb6d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2390eb6d

Branch: refs/heads/HDFS-7285
Commit: 2390eb6dae9fc463bb7955078a92c07e05cf31d7
Parents: 5cac91f
Author: Zhe Zhang z...@apache.org
Authored: Mon May 18 10:06:56 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:11 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  2 +
 .../hadoop/io/erasurecode/TestCoderBase.java| 50 ++-
 .../erasurecode/coder/TestErasureCoderBase.java | 89 +++-
 .../erasurecode/coder/TestRSErasureCoder.java   | 64 ++
 .../io/erasurecode/coder/TestXORCoder.java  | 24 --
 .../io/erasurecode/rawcoder/TestRSRawCoder.java | 76 +
 .../rawcoder/TestRSRawCoderBase.java| 51 +++
 .../erasurecode/rawcoder/TestRawCoderBase.java  | 45 +-
 .../erasurecode/rawcoder/TestXORRawCoder.java   | 24 --
 9 files changed, 274 insertions(+), 151 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2390eb6d/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 9749270..c10ffbd 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -44,3 +44,5 @@
 HADOOP-11818. Minor improvements for erasurecode classes. (Rakesh R via 
Kai Zheng)
 
 HADOOP-11841. Remove unused ecschema-def.xml files.  (szetszwo)
+
+HADOOP-11921. Enhance tests for erasure coders. (Kai Zheng via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2390eb6d/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
index 22fd98d..be1924c 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
@@ -49,15 +49,15 @@ public abstract class TestCoderBase {
* Prepare before running the case.
* @param numDataUnits
* @param numParityUnits
-   * @param erasedIndexes
+   * @param erasedDataIndexes
*/
   protected void prepare(Configuration conf, int numDataUnits,
- int numParityUnits, int[] erasedIndexes) {
+ int numParityUnits, int[] erasedDataIndexes) {
 this.conf = conf;
 this.numDataUnits = numDataUnits;
 this.numParityUnits = numParityUnits;
-this.erasedDataIndexes = erasedIndexes != null ?
-erasedIndexes : new int[] {0};
+this.erasedDataIndexes = erasedDataIndexes != null ?
+erasedDataIndexes : new int[] {0};
   }
 
   /**
@@ -82,15 +82,19 @@ public abstract class TestCoderBase {
   }
 
   /**
-   * Adjust and return erased indexes based on the array of the input chunks (
-   * parity chunks + data chunks).
-   * @return
+   * Adjust and return erased indexes altogether, including erased data indexes
+   * and parity indexes.
+   * @return erased indexes altogether
*/
   protected int[] getErasedIndexesForDecoding() {
 int[] erasedIndexesForDecoding = new int[erasedDataIndexes.length];
+
+int idx = 0;
+
 for (int i = 0; i < erasedDataIndexes.length; i++) {
-  erasedIndexesForDecoding[i] = erasedDataIndexes[i] + numParityUnits;
+  erasedIndexesForDecoding[idx ++] = erasedDataIndexes[i] + numParityUnits;
 }
+
 return erasedIndexesForDecoding;
   }
 
@@ -116,30 +120,23 @@ public abstract class TestCoderBase {
   }
 
   /**
-   * Have a copy of the data chunks that's to be erased thereafter. The copy
-   * will be used to compare and verify with the to be recovered chunks.
+   * Erase chunks to test the recovering of them. Before erasure clone them
+   * first so could return them.
* @param dataChunks
-   * @return
+   * @return clone of erased chunks
*/
-  protected ECChunk[] copyDataChunksToErase(ECChunk[] dataChunks) {
-ECChunk[] copiedChunks = new ECChunk[erasedDataIndexes.length];
-
-int j = 0;
-for (int i = 0; i < erasedDataIndexes.length; i++) {
-  copiedChunks[j 
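
The index mapping in getErasedIndexesForDecoding above reflects the
decoder's input layout: parity chunks come before data chunks, so an erased
data index i lands at position i + numParityUnits. A tiny worked example
(values chosen only for the demo):

import java.util.Arrays;

// Demonstrates the erased-index translation used by TestCoderBase above.
public class ErasedIndexDemo {
  public static void main(String[] args) {
    int numParityUnits = 3;
    int[] erasedDataIndexes = {0, 2};   // erased chunks among the data units

    int[] forDecoding = new int[erasedDataIndexes.length];
    for (int i = 0; i < erasedDataIndexes.length; i++) {
      // shift past the parity chunks that precede the data chunks
      forDecoding[i] = erasedDataIndexes[i] + numParityUnits;
    }
    System.out.println(Arrays.toString(forDecoding));  // [3, 5]
  }
}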

[25/50] hadoop git commit: HDFS-8281. Erasure Coding: implement parallel stateful reading for striped layout. Contributed by Jing Zhao.

2015-05-18 Thread jing9
HDFS-8281. Erasure Coding: implement parallel stateful reading for striped 
layout. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/28a46b48
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/28a46b48
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/28a46b48

Branch: refs/heads/HDFS-7285
Commit: 28a46b48279aae1201d960f26dc659d37407e24b
Parents: f8a39c9
Author: Jing Zhao ji...@apache.org
Authored: Mon May 4 14:44:58 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:08 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../org/apache/hadoop/hdfs/DFSInputStream.java  |  26 +++
 .../hadoop/hdfs/DFSStripedInputStream.java  | 217 +--
 .../hadoop/hdfs/util/StripedBlockUtil.java  |  34 ++-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  |  50 -
 .../hadoop/hdfs/TestPlanReadPortions.java   |   4 +-
 6 files changed, 246 insertions(+), 88 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/28a46b48/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index e30b2ed..77272e7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -161,3 +161,6 @@
 
 HDFS-8316. Erasure coding: refactor EC constants to be consistent with 
HDFS-8249.
 (Zhe Zhang via jing9)
+
+HDFS-8281. Erasure Coding: implement parallel stateful reading for striped 
layout.
+(jing9)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/28a46b48/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index bef4da0..ca799fa 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -716,6 +716,16 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   interface ReaderStrategy {
 public int doRead(BlockReader blockReader, int off, int len)
 throws ChecksumException, IOException;
+
+/**
+ * Copy data from the src ByteBuffer into the read buffer.
+ * @param src The src buffer where the data is copied from
+ * @param offset Useful only when the ReadStrategy is based on a byte 
array.
+ *   Indicate the offset of the byte array for copy.
+ * @param length Useful only when the ReadStrategy is based on a byte 
array.
+ *   Indicate the length of the data to copy.
+ */
+public int copyFrom(ByteBuffer src, int offset, int length);
   }
 
   protected void updateReadStatistics(ReadStatistics readStatistics,
@@ -749,6 +759,13 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   updateReadStatistics(readStatistics, nRead, blockReader);
   return nRead;
 }
+
+@Override
+public int copyFrom(ByteBuffer src, int offset, int length) {
+  ByteBuffer writeSlice = src.duplicate();
+  writeSlice.get(buf, offset, length);
+  return length;
+}
   }
 
   /**
@@ -782,6 +799,15 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
 }
   } 
 }
+
+@Override
+public int copyFrom(ByteBuffer src, int offset, int length) {
+  ByteBuffer writeSlice = src.duplicate();
+  int remaining = Math.min(buf.remaining(), writeSlice.remaining());
+  writeSlice.limit(writeSlice.position() + remaining);
+  buf.put(writeSlice);
+  return remaining;
+}
   }
 
   /* This is a used by regular read() and handles ChecksumExceptions.
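
Both copyFrom implementations above lean on ByteBuffer#duplicate(), which
shares the underlying bytes but keeps an independent position and limit, so
copying from the slice never disturbs the source buffer's cursor. A
stand-alone demo of the idiom:

import java.nio.ByteBuffer;

// Demonstrates the duplicate-and-copy idiom used by the two
// ReaderStrategy#copyFrom implementations above.
public class DuplicateCopyDemo {
  public static void main(String[] args) {
    ByteBuffer src = ByteBuffer.wrap("0123456789".getBytes());
    ByteBuffer dst = ByteBuffer.allocate(4);

    ByteBuffer writeSlice = src.duplicate();   // shared bytes, private cursor
    int remaining = Math.min(dst.remaining(), writeSlice.remaining());
    writeSlice.limit(writeSlice.position() + remaining);
    dst.put(writeSlice);                       // copies 4 bytes

    System.out.println(src.position());            // still 0
    System.out.println(new String(dst.array()));   // 0123
  }
}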

http://git-wip-us.apache.org/repos/asf/hadoop/blob/28a46b48/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index 0dc98fd..13c4743 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -17,6 +17,7 @@
  */
 package 

[28/50] hadoop git commit: HDFS-8324. Add trace info to DFSClient#getErasureCodingZoneInfo(..). Contributed by Vinayakumar B

2015-05-18 Thread jing9
HDFS-8324. Add trace info to DFSClient#getErasureCodingZoneInfo(..). 
Contributed by Vinayakumar B


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/27dc8fcf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/27dc8fcf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/27dc8fcf

Branch: refs/heads/HDFS-7285
Commit: 27dc8fcfd49b5d53a985682a953a5aa4659e952c
Parents: f76d0d6
Author: Uma Maheswara Rao G umamah...@apache.org
Authored: Tue May 5 19:25:21 2015 +0530
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:08 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt  | 3 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSClient.java   | 3 +++
 2 files changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/27dc8fcf/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index ef760fc..a8df3f2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -169,3 +169,6 @@
 
 HDFS-8242. Erasure Coding: XML based end-to-end test for ECCli commands
 (Rakesh R via vinayakumarb)
+
+HDFS-8324. Add trace info to DFSClient#getErasureCodingZoneInfo(..) 
(vinayakumarb via 
+umamahesh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/27dc8fcf/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 5fb23a0..63c27ef 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -3351,11 +3351,14 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
*/
   public ECZoneInfo getErasureCodingZoneInfo(String src) throws IOException {
 checkOpen();
+TraceScope scope = getPathTraceScope("getErasureCodingZoneInfo", src);
 try {
   return namenode.getErasureCodingZoneInfo(src);
 } catch (RemoteException re) {
   throw re.unwrapRemoteException(FileNotFoundException.class,
   AccessControlException.class, UnresolvedPathException.class);
+} finally {
+  scope.close();
 }
   }
 }
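
The fix is the standard scope-per-call pattern: open the trace scope before
the RPC and close it in a finally block, so the span ends whether the call
returns or throws. A minimal sketch under the assumption of a stand-in
Scope type (not the actual HTrace API):

// Minimal sketch of the try/finally tracing pattern above; Scope is a
// hypothetical stand-in for illustration, not the HTrace TraceScope class.
class TracedCallDemo {
  interface Scope extends AutoCloseable {
    @Override
    void close();   // narrowed: no checked exception
  }

  static Scope startScope(String description) {
    System.out.println("begin " + description);
    return () -> System.out.println("end " + description);
  }

  static String tracedCall() {
    Scope scope = startScope("getErasureCodingZoneInfo");
    try {
      return "zone-info";   // the real work (an RPC, faked here)
    } finally {
      scope.close();        // runs on success and on exceptions alike
    }
  }

  public static void main(String[] args) {
    System.out.println(tracedCall());
  }
}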



[11/50] hadoop git commit: HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may cause block id conflicts. Contributed by Jing Zhao.

2015-05-18 Thread jing9
HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may cause 
block id conflicts. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/851b1145
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/851b1145
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/851b1145

Branch: refs/heads/HDFS-7285
Commit: 851b1145ba9c7ba2b24787d26152f373381ee565
Parents: cb8dd8a
Author: Zhe Zhang z...@apache.org
Authored: Fri Apr 24 09:30:38 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:06 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 ++
 .../SequentialBlockGroupIdGenerator.java| 39 +++---
 .../SequentialBlockIdGenerator.java |  2 +-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 57 +++-
 .../server/namenode/TestAddStripedBlocks.java   | 21 
 5 files changed, 77 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/851b1145/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 9357e23..cf41a9b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -128,3 +128,6 @@
 
 HDFS-8223. Should calculate checksum for parity blocks in 
DFSStripedOutputStream.
 (Yi Liu via jing9)
+
+HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may 
cause 
+block id conflicts (Jing Zhao via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/851b1145/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
index e9e22ee..de8e379 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
@@ -19,9 +19,11 @@ package org.apache.hadoop.hdfs.server.blockmanagement;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.protocol.Block;
-import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.util.SequentialNumber;
 
+import static 
org.apache.hadoop.hdfs.protocol.HdfsConstants.BLOCK_GROUP_INDEX_MASK;
+import static 
org.apache.hadoop.hdfs.protocol.HdfsConstants.MAX_BLOCKS_IN_GROUP;
+
 /**
  * Generate the next valid block group ID by incrementing the maximum block
  * group ID allocated so far, with the first 2^10 block group IDs reserved.
@@ -34,6 +36,9 @@ import org.apache.hadoop.util.SequentialNumber;
  * bits (n+2) to (64-m) represent the ID of its block group, while the last m
  * bits represent its index of the group. The value m is determined by the
  * maximum number of blocks in a group (MAX_BLOCKS_IN_GROUP).
+ *
+ * Note that the {@link #nextValue()} methods requires external lock to
+ * guarantee IDs have no conflicts.
  */
 @InterfaceAudience.Private
 public class SequentialBlockGroupIdGenerator extends SequentialNumber {
@@ -47,32 +52,30 @@ public class SequentialBlockGroupIdGenerator extends 
SequentialNumber {
 
   @Override // NumberGenerator
   public long nextValue() {
-// Skip to next legitimate block group ID based on the naming protocol
-while (super.getCurrentValue() % HdfsConstants.MAX_BLOCKS_IN_GROUP > 0) {
-  super.nextValue();
-}
+skipTo((getCurrentValue() & ~BLOCK_GROUP_INDEX_MASK) + 
MAX_BLOCKS_IN_GROUP);
 // Make sure there's no conflict with existing random block IDs
-while (hasValidBlockInRange(super.getCurrentValue())) {
-  super.skipTo(super.getCurrentValue() +
-  HdfsConstants.MAX_BLOCKS_IN_GROUP);
+final Block b = new Block(getCurrentValue());
+while (hasValidBlockInRange(b)) {
+  skipTo(getCurrentValue() + MAX_BLOCKS_IN_GROUP);
+  b.setBlockId(getCurrentValue());
 }
-if (super.getCurrentValue() >= 0) {
-  BlockManager.LOG.warn("All negative block group IDs are used, " +
-  "growing into positive IDs, " +
-  "which might conflict with non-erasure coded blocks.");
+if (b.getBlockId() >= 0) {
+  throw new IllegalStateException("All 
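
The new skipTo call jumps straight to the next group boundary with bit
arithmetic instead of looping one ID at a time: clearing the low index bits
and adding MAX_BLOCKS_IN_GROUP always lands on a multiple of the group
size. A worked example assuming a group holds 16 blocks, i.e. the low 4
bits are the index (the actual mask comes from HdfsConstants):

// Worked example of the group-boundary jump above; group size assumed 16.
public class GroupIdJumpDemo {
  static final long MAX_BLOCKS_IN_GROUP = 16;
  static final long BLOCK_GROUP_INDEX_MASK = MAX_BLOCKS_IN_GROUP - 1; // 0xF

  public static void main(String[] args) {
    long current = -1005;   // some ID inside a group (IDs start negative)
    long next = (current & ~BLOCK_GROUP_INDEX_MASK) + MAX_BLOCKS_IN_GROUP;
    System.out.println(next);                              // -992
    System.out.println(next % MAX_BLOCKS_IN_GROUP == 0);   // true
  }
}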

[41/50] hadoop git commit: HDFS-7678. Erasure coding: DFSInputStream with decode functionality (pread). Contributed by Zhe Zhang.

2015-05-18 Thread jing9
HDFS-7678. Erasure coding: DFSInputStream with decode functionality (pread). 
Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a265c211
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a265c211
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a265c211

Branch: refs/heads/HDFS-7285
Commit: a265c2111b0b6d6241478fb14f6909422f6cb5c6
Parents: e2c1d18
Author: Zhe Zhang z...@apache.org
Authored: Mon May 11 21:10:23 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:10 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../hadoop/hdfs/DFSStripedInputStream.java  | 164 --
 .../erasurecode/ErasureCodingWorker.java|  10 +-
 .../hadoop/hdfs/util/StripedBlockUtil.java  | 517 +--
 .../hadoop/hdfs/TestDFSStripedInputStream.java  |  97 +++-
 .../hadoop/hdfs/TestWriteReadStripedFile.java   |  49 ++
 6 files changed, 768 insertions(+), 72 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a265c211/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index c7d01c7..0acf746 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -195,3 +195,6 @@
 
 HDFS-8355. Erasure Coding: Refactor BlockInfo and 
BlockInfoUnderConstruction.
 (Tsz Wo Nicholas Sze via jing9)
+
+HDFS-7678. Erasure coding: DFSInputStream with decode functionality 
(pread).
+(Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a265c211/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index 7425e75..7678fae 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -21,15 +21,27 @@ import com.google.common.base.Preconditions;
 import org.apache.hadoop.fs.ChecksumException;
 import org.apache.hadoop.fs.ReadOption;
 import org.apache.hadoop.fs.StorageType;
-import org.apache.hadoop.hdfs.protocol.*;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
 import 
org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException;
 import org.apache.hadoop.hdfs.util.StripedBlockUtil;
 import org.apache.hadoop.io.ByteBufferPool;
 
-import static org.apache.hadoop.hdfs.util.StripedBlockUtil.ReadPortion;
 import static org.apache.hadoop.hdfs.util.StripedBlockUtil.planReadPortions;
+import static 
org.apache.hadoop.hdfs.util.StripedBlockUtil.divideByteRangeIntoStripes;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.initDecodeInputs;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.decodeAndFillBuffer;
+import static 
org.apache.hadoop.hdfs.util.StripedBlockUtil.getNextCompletedStripedRead;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.ReadPortion;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.AlignedStripe;
+import static org.apache.hadoop.hdfs.util.StripedBlockUtil.StripingChunk;
+import static 
org.apache.hadoop.hdfs.util.StripedBlockUtil.StripingChunkReadResult;
 
 import org.apache.hadoop.io.erasurecode.ECSchema;
+
 import org.apache.hadoop.net.NetUtils;
 import org.apache.htrace.Span;
 import org.apache.htrace.Trace;
@@ -37,10 +49,12 @@ import org.apache.htrace.TraceScope;
 
 import java.io.EOFException;
 import java.io.IOException;
+import java.io.InterruptedIOException;
 import java.net.InetSocketAddress;
 import java.nio.ByteBuffer;
 import java.util.EnumSet;
 import java.util.Set;
+import java.util.Collection;
 import java.util.Map;
 import java.util.HashMap;
 import java.util.concurrent.CompletionService;
@@ -51,7 +65,6 @@ import java.util.concurrent.CancellationException;
 import java.util.concurrent.Callable;
 import java.util.concurrent.Future;
 
-
 /**
  * DFSStripedInputStream reads from striped block groups, illustrated below:
  *
@@ -125,6 +138,7 @@ public class 

[24/50] hadoop git commit: HDFS-7672. Handle write failure for stripping blocks and refactor the existing code in DFSStripedOutputStream and StripedDataStreamer.

2015-05-18 Thread jing9
HDFS-7672. Handle write failure for stripping blocks and refactor the existing 
code in DFSStripedOutputStream and StripedDataStreamer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dcfe0955
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dcfe0955
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dcfe0955

Branch: refs/heads/HDFS-7285
Commit: dcfe0955d3e568d2d0c58a7acf3eb87821768f9e
Parents: 27dc8fc
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Tue May 5 16:26:49 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:08 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |  69 +--
 .../hadoop/hdfs/DFSStripedOutputStream.java | 501 ---
 .../java/org/apache/hadoop/hdfs/DFSUtil.java|  10 +-
 .../org/apache/hadoop/hdfs/DataStreamer.java|  15 +-
 .../apache/hadoop/hdfs/StripedDataStreamer.java | 156 ++
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  |   2 -
 .../hadoop/hdfs/TestDFSStripedOutputStream.java |  18 +-
 .../TestDFSStripedOutputStreamWithFailure.java  | 323 
 9 files changed, 764 insertions(+), 333 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dcfe0955/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index a8df3f2..7efaa5a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -172,3 +172,6 @@
 
 HDFS-8324. Add trace info to DFSClient#getErasureCodingZoneInfo(..) 
(vinayakumarb via 
 umamahesh)
+
+HDFS-7672. Handle write failure for stripping blocks and refactor the
+existing code in DFSStripedOutputStream and StripedDataStreamer.  
(szetszwo)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dcfe0955/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index 0280d71..8580357 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -24,6 +24,8 @@ import java.nio.channels.ClosedChannelException;
 import java.util.EnumSet;
 import java.util.concurrent.atomic.AtomicReference;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.crypto.CryptoProtocolVersion;
@@ -86,6 +88,8 @@ import com.google.common.base.Preconditions;
 @InterfaceAudience.Private
 public class DFSOutputStream extends FSOutputSummer
 implements Syncable, CanSetDropBehind {
+  static final Log LOG = LogFactory.getLog(DFSOutputStream.class);
+
   /**
* Number of times to retry creating a file when there are transient 
* errors (typically related to encryption zones and KeyProvider operations).
@@ -419,24 +423,35 @@ public class DFSOutputStream extends FSOutputSummer
 streamer.incBytesCurBlock(len);
 
 // If packet is full, enqueue it for transmission
-//
 if (currentPacket.getNumChunks() == currentPacket.getMaxChunks() ||
 streamer.getBytesCurBlock() == blockSize) {
-  if (DFSClient.LOG.isDebugEnabled()) {
-DFSClient.LOG.debug("DFSClient writeChunk packet full seqno=" +
-currentPacket.getSeqno() +
-", src=" + src +
-", bytesCurBlock=" + streamer.getBytesCurBlock() +
-", blockSize=" + blockSize +
-", appendChunk=" + streamer.getAppendChunk());
-  }
-  streamer.waitAndQueuePacket(currentPacket);
-  currentPacket = null;
+  enqueueCurrentPacketFull();
+}
+  }
 
-  adjustChunkBoundary();
+  void enqueueCurrentPacket() throws IOException {
+streamer.waitAndQueuePacket(currentPacket);
+currentPacket = null;
+  }
 
-  endBlock();
+  void enqueueCurrentPacketFull() throws IOException {
+if (LOG.isDebugEnabled()) {
+  LOG.debug("enqueue full " + currentPacket + ", src=" + src
+  + ", bytesCurBlock=" + streamer.getBytesCurBlock()
+  + ", blockSize=" + blockSize
+  + ", appendChunk=" + streamer.getAppendChunk()
+  + ", " + streamer);
 }
+

[14/50] hadoop git commit: HDFS-8230. Erasure Coding: Ignore DatanodeProtocol#DNA_ERASURE_CODING_RECOVERY commands from standbynode if any (Contributed by Vinayakumar B)

2015-05-18 Thread jing9
HDFS-8230. Erasure Coding: Ignore DatanodeProtocol#DNA_ERASURE_CODING_RECOVERY 
commands from standbynode if any (Contributed by Vinayakumar B)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c6bb2f21
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c6bb2f21
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c6bb2f21

Branch: refs/heads/HDFS-7285
Commit: c6bb2f21624441bd5b8661f0a09e9e855c9cae5d
Parents: 3f37fd1
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue Apr 28 14:14:33 2015 +0530
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:06 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt  | 3 +++
 .../org/apache/hadoop/hdfs/server/datanode/BPOfferService.java| 1 +
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c6bb2f21/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index e8db485..c28473b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -134,3 +134,6 @@
 
 HDFS-8033. Erasure coding: stateful (non-positional) read from files in 
 striped layout (Zhe Zhang)
+
+HDFS-8230. Erasure Coding: Ignore 
DatanodeProtocol#DNA_ERASURE_CODING_RECOVERY 
+commands from standbynode if any (vinayakumarb)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c6bb2f21/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
index 69baac7..6606d0b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
@@ -757,6 +757,7 @@ class BPOfferService {
 case DatanodeProtocol.DNA_BALANCERBANDWIDTHUPDATE:
 case DatanodeProtocol.DNA_CACHE:
 case DatanodeProtocol.DNA_UNCACHE:
+case DatanodeProtocol.DNA_ERASURE_CODING_RECOVERY:
   LOG.warn("Got a command from standby NN - ignoring command:" + 
 cmd.getAction());
   break;
 default:



[35/50] hadoop git commit: HDFS-8368. Erasure Coding: DFS opening a non-existent file need to be handled properly. Contributed by Rakesh R.

2015-05-18 Thread jing9
HDFS-8368. Erasure Coding: DFS opening a non-existent file need to be handled 
properly. Contributed by Rakesh R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1ff3ad19
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1ff3ad19
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1ff3ad19

Branch: refs/heads/HDFS-7285
Commit: 1ff3ad195f28520db06d64b57e7cddeb684a7110
Parents: 809dfc4
Author: Zhe Zhang z...@apache.org
Authored: Tue May 12 14:31:28 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:10 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSClient.java | 12 +++-
 2 files changed, 10 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1ff3ad19/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index f026a5c..79ad208 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -201,3 +201,6 @@
 
 HDFS-8372. Erasure coding: compute storage type quotas for striped files,
 to be consistent with HDFS-8327. (Zhe Zhang via jing9)
+
+HDFS-8368. Erasure Coding: DFS opening a non-existent file need to be 
+handled properly (Rakesh R via zhz)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1ff3ad19/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 12c4a4b..cde1fc8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -1191,12 +1191,14 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 //Get block info from namenode
 TraceScope scope = getPathTraceScope("newDFSInputStream", src);
 try {
-  ECSchema schema = getFileInfo(src).getECSchema();
-  if (schema != null) {
-return new DFSStripedInputStream(this, src, verifyChecksum, schema);
-  } else {
-return new DFSInputStream(this, src, verifyChecksum);
+  HdfsFileStatus fileInfo = getFileInfo(src);
+  if (fileInfo != null) {
+ECSchema schema = fileInfo.getECSchema();
+if (schema != null) {
+  return new DFSStripedInputStream(this, src, verifyChecksum, schema);
+}
   }
+  return new DFSInputStream(this, src, verifyChecksum);
 } finally {
   scope.close();
 }
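
To make the fixed behavior concrete, a hedged sketch of the client-visible contract this guard restores: opening a missing path should surface FileNotFoundException from the regular open path, rather than a NullPointerException while probing the file status for an EC schema. The path and helper below are illustrative assumptions, not the committed test.

  import java.io.FileNotFoundException;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  class OpenMissingFileCheck {
    // fs is assumed to point at an HDFS instance with an EC zone at /ec
    static void expectFileNotFound(FileSystem fs) throws Exception {
      try {
        fs.open(new Path("/ec/does-not-exist")).close();
        throw new AssertionError("open() should have thrown FileNotFoundException");
      } catch (FileNotFoundException expected) {
        // the non-existent file is now handled properly
      }
    }
  }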



[47/50] hadoop git commit: HADOOP-11920. Refactor some codes for erasure coders. Contributed by Kai Zheng.

2015-05-18 Thread jing9
HADOOP-11920. Refactor some codes for erasure coders. Contributed by Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9be56c20
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9be56c20
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9be56c20

Branch: refs/heads/HDFS-7285
Commit: 9be56c20a25d0e6954337459b7a3c95cc2ba46ad
Parents: 2390eb6
Author: Zhe Zhang z...@apache.org
Authored: Mon May 18 10:09:57 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:11 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  2 +
 .../hadoop/fs/CommonConfigurationKeys.java  |  4 --
 .../apache/hadoop/io/erasurecode/ECChunk.java   |  2 +-
 .../erasurecode/coder/AbstractErasureCoder.java |  6 +-
 .../io/erasurecode/coder/RSErasureDecoder.java  | 40 +
 .../rawcoder/AbstractRawErasureCoder.java   | 63 +++-
 .../rawcoder/AbstractRawErasureDecoder.java | 54 ++---
 .../rawcoder/AbstractRawErasureEncoder.java | 52 +++-
 .../erasurecode/rawcoder/RawErasureCoder.java   |  8 +--
 .../erasurecode/rawcoder/RawErasureDecoder.java | 24 +---
 .../io/erasurecode/rawcoder/XORRawDecoder.java  | 24 ++--
 .../io/erasurecode/rawcoder/XORRawEncoder.java  |  6 +-
 .../hadoop/io/erasurecode/TestCoderBase.java|  4 +-
 .../erasurecode/coder/TestRSErasureCoder.java   |  6 +-
 14 files changed, 156 insertions(+), 139 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9be56c20/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index c10ffbd..a152e31 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -46,3 +46,5 @@
 HADOOP-11841. Remove unused ecschema-def.xml files.  (szetszwo)
 
 HADOOP-11921. Enhance tests for erasure coders. (Kai Zheng via Zhe Zhang)
+
+HADOOP-11920. Refactor some codes for erasure coders. (Kai Zheng via Zhe 
Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9be56c20/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
index bd2a24b..3f2871b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
@@ -143,10 +143,6 @@ public class CommonConfigurationKeys extends 
CommonConfigurationKeysPublic {
   /** Supported erasure codec classes */
   public static final String IO_ERASURECODE_CODECS_KEY = 
"io.erasurecode.codecs";
 
-  /** Use XOR raw coder when possible for the RS codec */
-  public static final String IO_ERASURECODE_CODEC_RS_USEXOR_KEY =
-  "io.erasurecode.codec.rs.usexor";
-
   /** Raw coder factory for the RS codec */
   public static final String IO_ERASURECODE_CODEC_RS_RAWCODER_KEY =
  "io.erasurecode.codec.rs.rawcoder";
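
With the usexor fallback key removed, choosing an RS implementation becomes a single configuration step through the remaining factory key. A hedged usage sketch; the factory class name is an assumption for illustration, not necessarily a class shipped at this point.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.CommonConfigurationKeys;

  class RsCoderConfig {
    static Configuration rsCoderConf() {
      Configuration conf = new Configuration();
      conf.set(CommonConfigurationKeys.IO_ERASURECODE_CODEC_RS_RAWCODER_KEY,
          "org.apache.hadoop.io.erasurecode.rawcoder.RSRawErasureCoderFactory");
      return conf;
    }
  }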

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9be56c20/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
index 01e8f35..436e13e 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
@@ -71,7 +71,7 @@ public class ECChunk {
* @param chunks
* @return an array of byte array
*/
-  public static byte[][] toArray(ECChunk[] chunks) {
+  public static byte[][] toArrays(ECChunk[] chunks) {
 byte[][] bytesArr = new byte[chunks.length][];
 
 ByteBuffer buffer;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9be56c20/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
 

[42/50] hadoop git commit: HDFS-8352. Erasure Coding: test webhdfs read write stripe file. (waltersu4549)

2015-05-18 Thread jing9
HDFS-8352. Erasure Coding: test webhdfs read write stripe file. (waltersu4549)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5cac91ff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5cac91ff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5cac91ff

Branch: refs/heads/HDFS-7285
Commit: 5cac91fff1c2ae7cfd1210627da253eab925e200
Parents: 6b596d6
Author: waltersu4549 waltersu4...@apache.org
Authored: Mon May 18 19:10:37 2015 +0800
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:11 2015 -0700

--
 .../hadoop/hdfs/TestWriteReadStripedFile.java   | 267 ++-
 1 file changed, 148 insertions(+), 119 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5cac91ff/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
index 57d6eb9..f78fb7a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
@@ -21,9 +21,13 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.apache.hadoop.hdfs.web.ByteRangeInputStream;
+import org.apache.hadoop.hdfs.web.WebHdfsConstants;
+import org.apache.hadoop.hdfs.web.WebHdfsTestUtil;
 import org.apache.hadoop.io.erasurecode.rawcoder.RSRawDecoder;
 import org.junit.AfterClass;
 import org.junit.Assert;
@@ -33,23 +37,26 @@ import org.junit.Test;
 import java.io.EOFException;
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.util.Random;
 
 public class TestWriteReadStripedFile {
   private static int dataBlocks = HdfsConstants.NUM_DATA_BLOCKS;
   private static int parityBlocks = HdfsConstants.NUM_PARITY_BLOCKS;
 
-
-  private static DistributedFileSystem fs;
   private final static int cellSize = HdfsConstants.BLOCK_STRIPED_CELL_SIZE;
   private final static int stripesPerBlock = 4;
   static int blockSize = cellSize * stripesPerBlock;
   static int numDNs = dataBlocks + parityBlocks + 2;
 
   private static MiniDFSCluster cluster;
+  private static Configuration conf;
+  private static FileSystem fs;
+
+  private static Random r= new Random();
 
   @BeforeClass
   public static void setup() throws IOException {
-Configuration conf = new Configuration();
+conf = new Configuration();
 conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
 cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDNs).build();
cluster.getFileSystem().getClient().createErasureCodingZone("/", null);
@@ -134,7 +141,7 @@ public class TestWriteReadStripedFile {
   @Test
   public void testFileMoreThanABlockGroup2() throws IOException {
testOneFileUsingDFSStripedInputStream("/MoreThanABlockGroup2",
-blockSize * dataBlocks + cellSize+ 123);
+blockSize * dataBlocks + cellSize + 123);
   }
 
 
@@ -171,7 +178,7 @@ public class TestWriteReadStripedFile {
   }
 
   private void assertSeekAndRead(FSDataInputStream fsdis, int pos,
-  int writeBytes) throws IOException {
+ int writeBytes) throws IOException {
 fsdis.seek(pos);
 byte[] buf = new byte[writeBytes];
 int readLen = readAll(fsdis, buf);
@@ -182,147 +189,169 @@ public class TestWriteReadStripedFile {
 }
   }
 
-  private void testOneFileUsingDFSStripedInputStream(String src, int 
writeBytes)
+  private void testOneFileUsingDFSStripedInputStream(String src, int 
fileLength)
   throws IOException {
-Path testPath = new Path(src);
-final byte[] bytes = generateBytes(writeBytes);
-DFSTestUtil.writeFile(fs, testPath, new String(bytes));
 
-//check file length
-FileStatus status = fs.getFileStatus(testPath);
-long fileLength = status.getLen();
+final byte[] expected = generateBytes(fileLength);
+Path srcPath = new Path(src);
+DFSTestUtil.writeFile(fs, srcPath, new String(expected));
+
+verifyLength(fs, srcPath, fileLength);
+
+byte[] smallBuf = new byte[1024];
+byte[] largeBuf = new byte[fileLength + 100];
+verifyPread(fs, srcPath, fileLength, expected, largeBuf);
+
+verifyStatefulRead(fs, srcPath, fileLength, expected, 

[15/50] hadoop git commit: HDFS-8033. Erasure coding: stateful (non-positional) read from files in striped layout. Contributed by Zhe Zhang.

2015-05-18 Thread jing9
HDFS-8033. Erasure coding: stateful (non-positional) read from files in striped 
layout. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3f37fd1f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3f37fd1f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3f37fd1f

Branch: refs/heads/HDFS-7285
Commit: 3f37fd1fde12396269fc8510e1d8171c1c7f9211
Parents: 851b114
Author: Zhe Zhang z...@apache.org
Authored: Fri Apr 24 22:36:15 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:06 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../org/apache/hadoop/hdfs/DFSInputStream.java  |  55 ++--
 .../hadoop/hdfs/DFSStripedInputStream.java  | 311 ++-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  |  43 +++
 .../apache/hadoop/hdfs/TestReadStripedFile.java | 110 ++-
 5 files changed, 465 insertions(+), 57 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f37fd1f/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index cf41a9b..e8db485 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -131,3 +131,6 @@
 
 HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may 
cause 
 block id conflicts (Jing Zhao via Zhe Zhang)
+
+HDFS-8033. Erasure coding: stateful (non-positional) read from files in 
+striped layout (Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f37fd1f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index 16250dd..6eb25d0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -95,34 +95,34 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   public static boolean tcpReadsDisabledForTesting = false;
   private long hedgedReadOpsLoopNumForTesting = 0;
   protected final DFSClient dfsClient;
-  private AtomicBoolean closed = new AtomicBoolean(false);
-  private final String src;
-  private final boolean verifyChecksum;
+  protected AtomicBoolean closed = new AtomicBoolean(false);
+  protected final String src;
+  protected final boolean verifyChecksum;
 
   // state by stateful read only:
   // (protected by lock on this)
   /
   private DatanodeInfo currentNode = null;
-  private LocatedBlock currentLocatedBlock = null;
-  private long pos = 0;
-  private long blockEnd = -1;
+  protected LocatedBlock currentLocatedBlock = null;
+  protected long pos = 0;
+  protected long blockEnd = -1;
   private BlockReader blockReader = null;
   
 
   // state shared by stateful and positional read:
   // (protected by lock on infoLock)
   
-  private LocatedBlocks locatedBlocks = null;
+  protected LocatedBlocks locatedBlocks = null;
   private long lastBlockBeingWrittenLength = 0;
   private FileEncryptionInfo fileEncryptionInfo = null;
-  private CachingStrategy cachingStrategy;
+  protected CachingStrategy cachingStrategy;
   
 
-  private final ReadStatistics readStatistics = new ReadStatistics();
+  protected final ReadStatistics readStatistics = new ReadStatistics();
   // lock for state shared between read and pread
   // Note: Never acquire a lock on this with this lock held to avoid 
deadlocks
   //   (it's OK to acquire this lock when the lock on this is held)
-  private final Object infoLock = new Object();
+  protected final Object infoLock = new Object();
 
   /**
* Track the ByteBuffers that we have handed out to readers.
@@ -239,7 +239,7 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
* back to the namenode to get a new list of block locations, and is
* capped at maxBlockAcquireFailures
*/
-  private int failures = 0;
+  protected int failures = 0;
 
   /* XXX Use of CocurrentHashMap is temp fix. Need to fix 
* parallel accesses to DFSInputStream (through ptreads) properly */
@@ -476,7 +476,7 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   }
 
   /** Fetch a block from namenode and cache it */
-  private void fetchBlockAt(long offset) throws IOException {
+  protected void fetchBlockAt(long 
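
The visibility changes above (private to protected on fields such as pos, blockEnd, and locatedBlocks) exist so DFSStripedInputStream can subclass the stateful-read machinery. As a reminder of what a stateful, non-positional read looks like from the caller's side, a short hedged sketch with an illustrative path and buffer size:

  import java.io.IOException;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  class StatefulReadDemo {
    static long statefulReadAll(FileSystem fs, Path path) throws IOException {
      long total = 0;
      try (FSDataInputStream in = fs.open(path)) {
        byte[] buf = new byte[4096];
        int n;
        // each read() advances the stream's internal position (the pos and
        // blockEnd state above), unlike pread, which takes an explicit offset
        while ((n = in.read(buf, 0, buf.length)) > 0) {
          total += n;
        }
      }
      return total;
    }
  }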

[49/50] hadoop git commit: HADOOP-11938. Enhance ByteBuffer version encode/decode API of raw erasure coder. Contributed by Kai Zheng.

2015-05-18 Thread jing9
HADOOP-11938. Enhance ByteBuffer version encode/decode API of raw erasure 
coder. Contributed by Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/187af2cd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/187af2cd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/187af2cd

Branch: refs/heads/HDFS-7285
Commit: 187af2cdfb33b28c4c00154bb3874609826236d9
Parents: 4dc91e5
Author: Zhe Zhang z...@apache.org
Authored: Mon May 18 10:14:54 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:12 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |   3 +
 .../apache/hadoop/io/erasurecode/ECChunk.java   |  35 ++---
 .../rawcoder/AbstractRawErasureCoder.java   |  77 +--
 .../rawcoder/AbstractRawErasureDecoder.java |  69 --
 .../rawcoder/AbstractRawErasureEncoder.java |  66 --
 .../io/erasurecode/rawcoder/RSRawDecoder.java   |  22 ++--
 .../io/erasurecode/rawcoder/RSRawEncoder.java   |  41 +++---
 .../io/erasurecode/rawcoder/XORRawDecoder.java  |  30 +++--
 .../io/erasurecode/rawcoder/XORRawEncoder.java  |  40 +++---
 .../erasurecode/rawcoder/util/GaloisField.java  | 112 
 .../hadoop/io/erasurecode/TestCoderBase.java| 131 +++
 .../erasurecode/coder/TestErasureCoderBase.java |  21 ++-
 .../io/erasurecode/rawcoder/TestRSRawCoder.java |  12 +-
 .../rawcoder/TestRSRawCoderBase.java|  12 +-
 .../erasurecode/rawcoder/TestRawCoderBase.java  |  57 +++-
 .../erasurecode/rawcoder/TestXORRawCoder.java   |  19 +++
 16 files changed, 535 insertions(+), 212 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/187af2cd/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 34dfc9e..c799b4f 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -51,3 +51,6 @@
 
 HADOOP-11566. Add tests and fix for erasure coders to recover erased 
parity 
 units. (Kai Zheng via Zhe Zhang)
+
+HADOOP-11938. Enhance ByteBuffer version encode/decode API of raw erasure 
+coder. (Kai Zheng via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/187af2cd/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
index 69a8343..310c738 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
@@ -72,34 +72,15 @@ public class ECChunk {
   }
 
   /**
-   * Convert an array of this chunks to an array of byte array.
-   * Note the chunk buffers are not affected.
-   * @param chunks
-   * @return an array of byte array
+   * Convert to a bytes array, just for test usage.
+   * @return bytes array
*/
-  public static byte[][] toArrays(ECChunk[] chunks) {
-byte[][] bytesArr = new byte[chunks.length][];
-
-ByteBuffer buffer;
-ECChunk chunk;
-for (int i = 0; i < chunks.length; i++) {
-  chunk = chunks[i];
-  if (chunk == null) {
-bytesArr[i] = null;
-continue;
-  }
-
-  buffer = chunk.getBuffer();
-  if (buffer.hasArray()) {
-bytesArr[i] = buffer.array();
-  } else {
-bytesArr[i] = new byte[buffer.remaining()];
-// Avoid affecting the original one
-buffer.mark();
-buffer.get(bytesArr[i]);
-buffer.reset();
-  }
-}
+  public byte[] toBytesArray() {
+byte[] bytesArr = new byte[chunkBuffer.remaining()];
+// Avoid affecting the original one
+chunkBuffer.mark();
+chunkBuffer.get(bytesArr);
+chunkBuffer.reset();
 
 return bytesArr;
   }
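
A small usage sketch of the new instance method: the mark/get/reset sequence copies the remaining bytes without disturbing the buffer's position. The literal bytes are illustrative, and ECChunk's ByteBuffer constructor is assumed from the surrounding class.

  import java.nio.ByteBuffer;
  import org.apache.hadoop.io.erasurecode.ECChunk;

  class ToBytesArrayDemo {
    static void demo() {
      ByteBuffer buffer = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});
      ECChunk chunk = new ECChunk(buffer);  // wraps, does not copy
      byte[] copy = chunk.toBytesArray();   // copies {1, 2, 3, 4}
      assert copy.length == 4 && buffer.remaining() == 4; // buffer unchanged
    }
  }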

http://git-wip-us.apache.org/repos/asf/hadoop/blob/187af2cd/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
index 2400313..5268962 100644
--- 

[23/50] hadoop git commit: HDFS-7348. Erasure Coding: DataNode reconstruct striped blocks. Contributed by Yi Liu.

2015-05-18 Thread jing9
HDFS-7348. Erasure Coding: DataNode reconstruct striped blocks. Contributed by 
Yi Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1efb8ed4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1efb8ed4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1efb8ed4

Branch: refs/heads/HDFS-7285
Commit: 1efb8ed4bdc73f6f7602af8e74e4a74446a33497
Parents: dcfe095
Author: Zhe Zhang z...@apache.org
Authored: Tue May 5 16:33:56 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:08 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../org/apache/hadoop/hdfs/BlockReader.java |   6 +
 .../apache/hadoop/hdfs/BlockReaderLocal.java|   5 +
 .../hadoop/hdfs/BlockReaderLocalLegacy.java |   5 +
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   6 +
 .../java/org/apache/hadoop/hdfs/DFSPacket.java  |  10 +-
 .../apache/hadoop/hdfs/RemoteBlockReader.java   |   5 +
 .../apache/hadoop/hdfs/RemoteBlockReader2.java  |   5 +
 .../hadoop/hdfs/server/datanode/DNConf.java |  27 +
 .../hadoop/hdfs/server/datanode/DataNode.java   |  31 +-
 .../erasurecode/ErasureCodingWorker.java| 893 ++-
 .../hadoop/hdfs/util/StripedBlockUtil.java  |  49 +-
 .../src/main/resources/hdfs-default.xml |  31 +-
 .../hadoop/hdfs/TestRecoverStripedFile.java | 356 
 14 files changed, 1377 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1efb8ed4/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 7efaa5a..0d2d448 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -175,3 +175,6 @@
 
 HDFS-7672. Handle write failure for stripping blocks and refactor the
 existing code in DFSStripedOutputStream and StripedDataStreamer.  
(szetszwo)
+
+HDFS-7348. Erasure Coding: DataNode reconstruct striped blocks. 
+(Yi Liu via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1efb8ed4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
index aa3e8ba..0a5511e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.fs.ByteBufferReadable;
 import org.apache.hadoop.fs.ReadOption;
 import org.apache.hadoop.hdfs.shortcircuit.ClientMmap;
+import org.apache.hadoop.util.DataChecksum;
 
 /**
  * A BlockReader is responsible for reading a single block
@@ -99,4 +100,9 @@ public interface BlockReader extends ByteBufferReadable {
*  supported.
*/
  ClientMmap getClientMmap(EnumSet<ReadOption> opts);
+
+  /**
+   * @return  The DataChecksum used by the read block
+   */
+  DataChecksum getDataChecksum();
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1efb8ed4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
index d913f3a..0b2420d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
@@ -738,4 +738,9 @@ class BlockReaderLocal implements BlockReader {
   void forceUnanchorable() {
 replica.getSlot().makeUnanchorable();
   }
+
+  @Override
+  public DataChecksum getDataChecksum() {
+return checksum;
+  }
 }
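
One reason reconstruction work needs this accessor: data written back for a recovered block must be checksummed with the same algorithm and bytes-per-checksum as the source replicas. A hedged sketch of deriving matching parameters; the helper name is an assumption, not part of the commit.

  import org.apache.hadoop.hdfs.BlockReader;
  import org.apache.hadoop.util.DataChecksum;

  class ChecksumCloner {
    /** Clone the source reader's checksum parameters for the rebuilt block. */
    static DataChecksum checksumFor(BlockReader reader) {
      DataChecksum c = reader.getDataChecksum();
      return DataChecksum.newDataChecksum(
          c.getChecksumType(), c.getBytesPerChecksum());
    }
  }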

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1efb8ed4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocalLegacy.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocalLegacy.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocalLegacy.java
index c16ffdf..04cf733 100644

[05/50] hadoop git commit: HDFS-8156. Add/implement necessary APIs even if we just have the system default schema. Contributed by Kai Zheng.

2015-05-18 Thread jing9
HDFS-8156. Add/implement necessary APIs even if we just have the system default 
schema. Contributed by Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dbeaa011
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dbeaa011
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dbeaa011

Branch: refs/heads/HDFS-7285
Commit: dbeaa011b9d71616bf1390aee7766630edd8617a
Parents: 113f920
Author: Zhe Zhang z...@apache.org
Authored: Wed Apr 22 14:48:54 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon May 18 22:11:05 2015 -0700

--
 .../apache/hadoop/io/erasurecode/ECSchema.java  | 173 +++
 .../hadoop/io/erasurecode/TestECSchema.java |   2 +-
 .../hadoop/io/erasurecode/TestSchemaLoader.java |   6 +-
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |   2 +-
 .../hdfs/server/namenode/ECSchemaManager.java   |  79 -
 .../namenode/ErasureCodingZoneManager.java  |  16 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  29 +++-
 .../org/apache/hadoop/hdfs/TestECSchemas.java   |   5 +-
 .../hadoop/hdfs/TestErasureCodingZones.java |  45 +++--
 10 files changed, 249 insertions(+), 111 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dbeaa011/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
index 32077f6..f058ea7 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.io.erasurecode;
 
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.Map;
 
 /**
@@ -30,55 +31,80 @@ public final class ECSchema {
  public static final String CHUNK_SIZE_KEY = "chunkSize";
   public static final int DEFAULT_CHUNK_SIZE = 256 * 1024; // 256K
 
-  private String schemaName;
-  private String codecName;
-  private Map<String, String> options;
-  private int numDataUnits;
-  private int numParityUnits;
-  private int chunkSize;
+  /**
+   * A friendly and understandable name that can mean what's it, also serves as
+   * the identifier that distinguish it from other schemas.
+   */
+  private final String schemaName;
+
+  /**
+   * The erasure codec name associated.
+   */
+  private final String codecName;
+
+  /**
+   * Number of source data units coded
+   */
+  private final int numDataUnits;
+
+  /**
+   * Number of parity units generated in a coding
+   */
+  private final int numParityUnits;
+
+  /**
+   * Unit data size for each chunk in a coding
+   */
+  private final int chunkSize;
+
+  /*
+   * An erasure code can have its own specific advanced parameters, subject to
+   * itself to interpret these key-value settings.
+   */
+  private final Map<String, String> extraOptions;
 
   /**
-   * Constructor with schema name and provided options. Note the options may
+   * Constructor with schema name and provided all options. Note the options 
may
* contain additional information for the erasure codec to interpret further.
* @param schemaName schema name
-   * @param options schema options
+   * @param allOptions all schema options
*/
-  public ECSchema(String schemaName, Map<String, String> options) {
+  public ECSchema(String schemaName, Map<String, String> allOptions) {
 assert (schemaName != null && ! schemaName.isEmpty());
 
 this.schemaName = schemaName;
 
-if (options == null || options.isEmpty()) {
+if (allOptions == null || allOptions.isEmpty()) {
   throw new IllegalArgumentException("No schema options are provided");
 }
 
-String codecName = options.get(CODEC_NAME_KEY);
+this.codecName = allOptions.get(CODEC_NAME_KEY);
 if (codecName == null || codecName.isEmpty()) {
   throw new IllegalArgumentException("No codec option is provided");
 }
 
-int dataUnits = 0, parityUnits = 0;
-try {
-  if (options.containsKey(NUM_DATA_UNITS_KEY)) {
-dataUnits = Integer.parseInt(options.get(NUM_DATA_UNITS_KEY));
-  }
-} catch (NumberFormatException e) {
-  throw new IllegalArgumentException("Option value " +
-  options.get(NUM_DATA_UNITS_KEY) + " for " + NUM_DATA_UNITS_KEY +
-  " is found. It should be an integer");
+int tmpNumDataUnits = extractIntOption(NUM_DATA_UNITS_KEY, allOptions);
+int tmpNumParityUnits = extractIntOption(NUM_PARITY_UNITS_KEY, allOptions);
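
A hedged construction sketch for the refactored class: all options are parsed once in the constructor and the fields are final thereafter. The key constants follow the ones referenced above; the codec name and unit counts are illustrative.

  import java.util.HashMap;
  import java.util.Map;
  import org.apache.hadoop.io.erasurecode.ECSchema;

  class SchemaDemo {
    static ECSchema rs63Schema() {
      Map<String, String> allOptions = new HashMap<String, String>();
      allOptions.put(ECSchema.CODEC_NAME_KEY, "rs");
      allOptions.put(ECSchema.NUM_DATA_UNITS_KEY, "6");   // parsed via extractIntOption
      allOptions.put(ECSchema.NUM_PARITY_UNITS_KEY, "3");
      return new ECSchema("RS-6-3", allOptions);          // throws on bad options
    }
  }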
