[hadoop] branch trunk updated: HDFS-15578: Fix the rename issues with fallback fs enabled (#2305). Contributed by Uma Maheswara Rao G.

2020-09-16 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e4cb0d3  HDFS-15578: Fix the rename issues with fallback fs enabled 
(#2305). Contributed by Uma Maheswara Rao G.
e4cb0d3 is described below

commit e4cb0d351450dba10cd6a0a6d999cc4423f1c2a9
Author: Uma Maheswara Rao G 
AuthorDate: Wed Sep 16 22:43:00 2020 -0700

HDFS-15578: Fix the rename issues with fallback fs enabled (#2305). 
Contributed by Uma Maheswara Rao G.

Co-authored-by: Uma Maheswara Rao G 
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |  24 +++--
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  52 +--
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  59 +---
 .../hadoop/fs/viewfs/TestViewfsFileStatus.java |   4 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java|   4 +-
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 101 +
 ...estViewDistributedFileSystemWithMountLinks.java |  95 ++-
 7 files changed, 307 insertions(+), 32 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index fceb73a..2a38693 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -706,19 +706,27 @@ abstract class InodeTree<T> {
 final T targetFileSystem;
 final String resolvedPath;
 final Path remainingPath;   // to resolve in the target FileSystem
+private final boolean isLastInternalDirLink;
 
 ResolveResult(final ResultKind k, final T targetFs, final String resolveP,
-final Path remainingP) {
+final Path remainingP, boolean isLastInternalDirLink) {
  kind = k;
  targetFileSystem = targetFs;
  resolvedPath = resolveP;
  remainingPath = remainingP;
+  this.isLastInternalDirLink = isLastInternalDirLink;
 }
 
 // Internal dir path resolution completed within the mount table
 boolean isInternalDir() {
   return (kind == ResultKind.INTERNAL_DIR);
 }
+
+// Indicates whether the internal dir path resolution completed at the link
+// or resolved due to fallback.
+boolean isLastInternalDirLink() {
+  return this.isLastInternalDirLink;
+}
   }
 
   /**
@@ -737,7 +745,7 @@ abstract class InodeTree<T> {
   getRootDir().getInternalDirFs()
   : getRootLink().getTargetFileSystem();
   resolveResult = new ResolveResult(ResultKind.INTERNAL_DIR,
-  targetFs, root.fullPath, SlashPath);
+  targetFs, root.fullPath, SlashPath, false);
   return resolveResult;
 }
 
@@ -755,7 +763,8 @@ abstract class InodeTree<T> {
   }
   remainingPath = new Path(remainingPathStr.toString());
   resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR,
-  getRootLink().getTargetFileSystem(), root.fullPath, remainingPath);
+  getRootLink().getTargetFileSystem(), root.fullPath, remainingPath,
+  true);
   return resolveResult;
 }
 Preconditions.checkState(root.isInternalDir());
@@ -775,7 +784,7 @@ abstract class InodeTree<T> {
 if (hasFallbackLink()) {
   resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR,
   getRootFallbackLink().getTargetFileSystem(), root.fullPath,
-  new Path(p));
+  new Path(p), false);
   return resolveResult;
 } else {
   StringBuilder failedAt = new StringBuilder(path[0]);
@@ -801,7 +810,8 @@ abstract class InodeTree<T> {
   remainingPath = new Path(remainingPathStr.toString());
 }
 resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR,
-link.getTargetFileSystem(), nextInode.fullPath, remainingPath);
+link.getTargetFileSystem(), nextInode.fullPath, remainingPath,
+true);
 return resolveResult;
   } else if (nextInode.isInternalDir()) {
 curInode = (INodeDir<T>) nextInode;
@@ -824,7 +834,7 @@ abstract class InodeTree<T> {
   remainingPath = new Path(remainingPathStr.toString());
 }
 resolveResult = new ResolveResult(ResultKind.INTERNAL_DIR,
-curInode.getInternalDirFs(), curInode.fullPath, remainingPath);
+curInode.getInternalDirFs(), curInode.fullPath, remainingPath, false);
 return resolveResult;
   }
 
@@ -874,7 +884,7 @@ abstract class InodeTree<T> {
   T targetFs = getTargetFileSystem(
   new URI(targetOfResolvedPathStr));
   return new ResolveResult(resultKind, targetFs, resolvedPathStr,
-  remainingPath);
+  remainingPath, true);
 } catch (IOException ex) {
   

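The boolean threaded through `ResolveResult` in the diff above can be hard to follow in patch form. Below is a hedged, self-contained sketch of the idea, with hypothetical, simplified names (not the real ViewFs classes): the flag records whether path resolution ended at an explicit mount link or fell through to the fallback file system, which is what a rename check needs to know before delegating to a single target fs.

```java
// Hypothetical, simplified model of the flag added to ResolveResult in this
// commit; class and method names are illustrative, not the actual Hadoop API.
class ResolveResult {
  enum Kind { INTERNAL_DIR, EXTERNAL_DIR }

  final Kind kind;
  final String resolvedPath;
  // true only when resolution ended at an explicit mount link;
  // false when the path was resolved via the fallback file system
  private final boolean lastInternalDirLink;

  ResolveResult(Kind kind, String resolvedPath, boolean lastInternalDirLink) {
    this.kind = kind;
    this.resolvedPath = resolvedPath;
    this.lastInternalDirLink = lastInternalDirLink;
  }

  boolean isLastInternalDirLink() {
    return lastInternalDirLink;
  }

  // A rename can only be delegated to one target fs when both ends resolved
  // the same way (both through mount links, or both via fallback).
  static boolean sameResolutionKind(ResolveResult src, ResolveResult dst) {
    return src.isLastInternalDirLink() == dst.isLastInternalDirLink();
  }

  public static void main(String[] args) {
    ResolveResult viaLink = new ResolveResult(Kind.EXTERNAL_DIR, "/user", true);
    ResolveResult viaFallback = new ResolveResult(Kind.EXTERNAL_DIR, "/tmp", false);
    System.out.println(sameResolutionKind(viaLink, viaLink));     // true
    System.out.println(sameResolutionKind(viaLink, viaFallback)); // false
  }
}
```

Without such a flag, a path resolved via fallback and a path resolved via a mount link both look like `EXTERNAL_DIR` results, which is the ambiguity this commit removes.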

[hadoop] branch branch-3.1 updated: HDFS-15574. Remove unnecessary sort of block list in DirectoryScanner. Contributed by Stephen O'Donnell.

2020-09-16 Thread hemanthboyina
This is an automated email from the ASF dual-hosted git repository.

hemanthboyina pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 43b113d  HDFS-15574. Remove unnecessary sort of block list in 
DirectoryScanner. Contributed by Stephen O'Donnell.
43b113d is described below

commit 43b113de69a0f54aa3577f99550468dd26c95490
Author: hemanthboyina 
AuthorDate: Thu Sep 17 10:15:18 2020 +0530

HDFS-15574. Remove unnecessary sort of block list in DirectoryScanner. 
Contributed by Stephen O'Donnell.

(cherry picked from commit aa582ccc2a2d7fa08cfa1a04d4cfa28c40183f14)
---
 .../hdfs/server/datanode/DirectoryScanner.java |  6 ++--
 .../server/datanode/fsdataset/FsDatasetSpi.java|  7 +++--
 .../datanode/fsdataset/impl/FsDatasetImpl.java |  7 +++--
 .../org/apache/hadoop/hdfs/TestCrcCorruption.java  |  2 +-
 .../hadoop/hdfs/TestReconstructStripedFile.java|  2 +-
 .../hdfs/server/datanode/SimulatedFSDataset.java   |  2 +-
 .../datanode/extdataset/ExternalDatasetImpl.java   |  2 +-
 .../datanode/fsdataset/impl/TestFsDatasetImpl.java | 36 ++
 8 files changed, 50 insertions(+), 14 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index ab9743c..40a4cb9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -22,7 +22,6 @@ import java.io.File;
 import java.io.FilenameFilter;
 import java.io.IOException;
 import java.util.Arrays;
-import java.util.Collections;
 import java.util.HashMap;
 import java.util.LinkedList;
 import java.util.List;
@@ -404,9 +403,8 @@ public class DirectoryScanner implements Runnable {
 diffs.put(bpid, diffRecord);
 
 statsRecord.totalBlocks = blockpoolReport.length;
-final List<ReplicaInfo> bl = dataset.getFinalizedBlocks(bpid);
-Collections.sort(bl); // Sort based on blockId
-  
+final List<ReplicaInfo> bl = dataset.getSortedFinalizedBlocks(bpid);
+
 int d = 0; // index for blockpoolReport
 int m = 0; // index for memReprot
 while (m < bl.size() && d < blockpoolReport.length) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
index 578c390..b11f05f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
@@ -237,16 +237,17 @@ public interface FsDatasetSpi<V extends FsVolumeSpi> extends FSDatasetMBean {
   VolumeFailureSummary getVolumeFailureSummary();
 
   /**
-   * Gets a list of references to the finalized blocks for the given block 
pool.
+   * Gets a sorted list of references to the finalized blocks for the given
+   * block pool. The list is sorted by blockID.
* 
* Callers of this function should call
* {@link FsDatasetSpi#acquireDatasetLock} to avoid blocks' status being
* changed during list iteration.
* 
* @return a list of references to the finalized blocks for the given block
-   * pool.
+   * pool. The list is sorted by blockID.
*/
-  List<ReplicaInfo> getFinalizedBlocks(String bpid);
+  List<ReplicaInfo> getSortedFinalizedBlocks(String bpid);
 
   /**
* Check whether the in-memory block record matches the block on the disk,
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 52c1b9d..7c03bd8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -1918,17 +1918,18 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
   }
 
   /**
-   * Gets a list of references to the finalized blocks for the given block 
pool.
+   * Gets a list of references to the finalized blocks for the given block 
pool,
+   * sorted by blockID.
* 
* Callers of this function should call
* {@link FsDatasetSpi#acquireDatasetLock} to avoid blocks' status being
* changed during list iteration.
* 
* @return a list of references to the finalized blocks for the given block
-   * pool.
+   * pool. The 

[hadoop] branch branch-3.2 updated: HDFS-15574. Remove unnecessary sort of block list in DirectoryScanner. Contributed by Stephen O'Donnell.

2020-09-16 Thread hemanthboyina
This is an automated email from the ASF dual-hosted git repository.

hemanthboyina pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new aa582cc  HDFS-15574. Remove unnecessary sort of block list in 
DirectoryScanner. Contributed by Stephen O'Donnell.
aa582cc is described below

commit aa582ccc2a2d7fa08cfa1a04d4cfa28c40183f14
Author: hemanthboyina 
AuthorDate: Thu Sep 17 10:15:18 2020 +0530

HDFS-15574. Remove unnecessary sort of block list in DirectoryScanner. 
Contributed by Stephen O'Donnell.
---
 .../hdfs/server/datanode/DirectoryScanner.java |  6 ++--
 .../server/datanode/fsdataset/FsDatasetSpi.java|  7 +++--
 .../datanode/fsdataset/impl/FsDatasetImpl.java |  7 +++--
 .../org/apache/hadoop/hdfs/TestCrcCorruption.java  |  2 +-
 .../hadoop/hdfs/TestReconstructStripedFile.java|  2 +-
 .../hdfs/server/datanode/SimulatedFSDataset.java   |  2 +-
 .../datanode/extdataset/ExternalDatasetImpl.java   |  2 +-
 .../datanode/fsdataset/impl/TestFsDatasetImpl.java | 36 ++
 8 files changed, 50 insertions(+), 14 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 99584d9..aede03e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -22,7 +22,6 @@ import java.io.File;
 import java.io.FilenameFilter;
 import java.io.IOException;
 import java.util.Arrays;
-import java.util.Collections;
 import java.util.HashMap;
 import java.util.LinkedList;
 import java.util.List;
@@ -405,9 +404,8 @@ public class DirectoryScanner implements Runnable {
 diffs.put(bpid, diffRecord);
 
 statsRecord.totalBlocks = blockpoolReport.length;
-final List<ReplicaInfo> bl = dataset.getFinalizedBlocks(bpid);
-Collections.sort(bl); // Sort based on blockId
-  
+final List<ReplicaInfo> bl = dataset.getSortedFinalizedBlocks(bpid);
+
 int d = 0; // index for blockpoolReport
 int m = 0; // index for memReprot
 while (m < bl.size() && d < blockpoolReport.length) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
index 78a5cfc..cddb7e7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
@@ -237,16 +237,17 @@ public interface FsDatasetSpi<V extends FsVolumeSpi> extends FSDatasetMBean {
   VolumeFailureSummary getVolumeFailureSummary();
 
   /**
-   * Gets a list of references to the finalized blocks for the given block 
pool.
+   * Gets a sorted list of references to the finalized blocks for the given
+   * block pool. The list is sorted by blockID.
* 
* Callers of this function should call
* {@link FsDatasetSpi#acquireDatasetLock} to avoid blocks' status being
* changed during list iteration.
* 
* @return a list of references to the finalized blocks for the given block
-   * pool.
+   * pool. The list is sorted by blockID.
*/
-  List<ReplicaInfo> getFinalizedBlocks(String bpid);
+  List<ReplicaInfo> getSortedFinalizedBlocks(String bpid);
 
   /**
* Check whether the in-memory block record matches the block on the disk,
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 7998ff4..4b60d67 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -1920,17 +1920,18 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
   }
 
   /**
-   * Gets a list of references to the finalized blocks for the given block 
pool.
+   * Gets a list of references to the finalized blocks for the given block 
pool,
+   * sorted by blockID.
* 
* Callers of this function should call
* {@link FsDatasetSpi#acquireDatasetLock} to avoid blocks' status being
* changed during list iteration.
* 
* @return a list of references to the finalized blocks for the given block
-   * pool.
+   * pool. The list is sorted by blockID.
*/
   @Override
-  public List 

[hadoop] branch branch-3.3 updated: HDFS-15574. Remove unnecessary sort of block list in DirectoryScanner. Contributed by Stephen O'Donnell.

2020-09-16 Thread hemanthboyina
This is an automated email from the ASF dual-hosted git repository.

hemanthboyina pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 94e5c52  HDFS-15574. Remove unnecessary sort of block list in 
DirectoryScanner. Contributed by Stephen O'Donnell.
94e5c52 is described below

commit 94e5c5257f1aa42a1c7e18b7eebf0bbfd2df070f
Author: hemanthboyina 
AuthorDate: Thu Sep 17 09:40:36 2020 +0530

HDFS-15574. Remove unnecessary sort of block list in DirectoryScanner. 
Contributed by Stephen O'Donnell.
---
 .../hdfs/server/datanode/DirectoryScanner.java |  3 +-
 .../server/datanode/fsdataset/FsDatasetSpi.java|  7 +++--
 .../datanode/fsdataset/impl/FsDatasetImpl.java |  7 +++--
 .../org/apache/hadoop/hdfs/TestCrcCorruption.java  |  2 +-
 .../hadoop/hdfs/TestReconstructStripedFile.java|  2 +-
 .../hdfs/server/datanode/SimulatedFSDataset.java   |  2 +-
 .../datanode/extdataset/ExternalDatasetImpl.java   |  2 +-
 .../datanode/fsdataset/impl/TestFsDatasetImpl.java | 36 ++
 8 files changed, 49 insertions(+), 12 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 35625ce..bbf12ff 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -482,8 +482,7 @@ public class DirectoryScanner implements Runnable {
 Collection diffRecord = new ArrayList<>();
 
 statsRecord.totalBlocks = blockpoolReport.size();
-final List<ReplicaInfo> bl = dataset.getFinalizedBlocks(bpid);
-Collections.sort(bl); // Sort based on blockId
+final List<ReplicaInfo> bl = dataset.getSortedFinalizedBlocks(bpid);
 
 int d = 0; // index for blockpoolReport
 int m = 0; // index for memReprot
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
index 2e5135d..854953a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
@@ -237,16 +237,17 @@ public interface FsDatasetSpi<V extends FsVolumeSpi> extends FSDatasetMBean {
   VolumeFailureSummary getVolumeFailureSummary();
 
   /**
-   * Gets a list of references to the finalized blocks for the given block 
pool.
+   * Gets a sorted list of references to the finalized blocks for the given
+   * block pool. The list is sorted by blockID.
* 
* Callers of this function should call
* {@link FsDatasetSpi#acquireDatasetLock} to avoid blocks' status being
* changed during list iteration.
* 
* @return a list of references to the finalized blocks for the given block
-   * pool.
+   * pool. The list is sorted by blockID.
*/
-  List<ReplicaInfo> getFinalizedBlocks(String bpid);
+  List<ReplicaInfo> getSortedFinalizedBlocks(String bpid);
 
   /**
* Check whether the in-memory block record matches the block on the disk,
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 5833cb1..99a1765 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -1936,17 +1936,18 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
   }
 
   /**
-   * Gets a list of references to the finalized blocks for the given block 
pool.
+   * Gets a list of references to the finalized blocks for the given block 
pool,
+   * sorted by blockID.
* 
* Callers of this function should call
* {@link FsDatasetSpi#acquireDatasetLock()} to avoid blocks' status being
* changed during list iteration.
* 
* @return a list of references to the finalized blocks for the given block
-   * pool.
+   * pool. The list is sorted by blockID.
*/
   @Override
-  public List<ReplicaInfo> getFinalizedBlocks(String bpid) {
+  public List<ReplicaInfo> getSortedFinalizedBlocks(String bpid) {
 try (AutoCloseableLock lock = datasetWriteLock.acquire()) {
   final List<ReplicaInfo> finalized = new ArrayList<>(
   volumeMap.size(bpid));
diff --git 

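The reason `getSortedFinalizedBlocks` lets DirectoryScanner drop its caller-side `Collections.sort` is the two-pointer merge the scanner runs over the on-disk block report and the in-memory block list, which only works when both sides are sorted by blockId. The following is a hedged, standalone sketch of that pattern with hypothetical names, not the actual DirectoryScanner code:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative two-pointer diff over two blockId-sorted inputs, mirroring the
// d/m index loop in DirectoryScanner. Returns the ids present in memory but
// missing on disk, space-separated.
public class SortedDiff {
  static String diff(long[] onDisk, List<Long> inMemory) {
    StringBuilder missingOnDisk = new StringBuilder();
    int d = 0; // index into the on-disk report
    int m = 0; // index into the in-memory list
    while (m < inMemory.size() && d < onDisk.length) {
      long mem = inMemory.get(m);
      long disk = onDisk[d];
      if (mem == disk) {        // present on both sides
        m++;
        d++;
      } else if (mem < disk) {  // in memory, missing on disk
        missingOnDisk.append(mem).append(' ');
        m++;
      } else {                  // on disk, missing in memory
        d++;
      }
    }
    while (m < inMemory.size()) { // tail of the in-memory list
      missingOnDisk.append(inMemory.get(m++)).append(' ');
    }
    return missingOnDisk.toString().trim();
  }

  public static void main(String[] args) {
    // Both inputs sorted by blockId, as getSortedFinalizedBlocks() guarantees.
    System.out.println(diff(new long[]{1, 3, 4},
        Arrays.asList(1L, 2L, 3L, 5L))); // prints "2 5"
  }
}
```

Returning an already-sorted list from under the dataset lock avoids sorting a potentially large list on every scan while the lock is held, which is the performance point of HDFS-15574.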
[hadoop] branch branch-3.1 updated: Revert "HADOOP-17246. Fix build the hadoop-build Docker image failed (#2277)"

2020-09-16 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 3c932dd  Revert "HADOOP-17246. Fix build the hadoop-build Docker image 
failed (#2277)"
3c932dd is described below

commit 3c932dd3d5d722c3fccaaf6f57a64cf3f70d66f7
Author: Akira Ajisaka 
AuthorDate: Wed Sep 16 16:47:08 2020 +0900

Revert "HADOOP-17246. Fix build the hadoop-build Docker image failed 
(#2277)"

This reverts commit 63c8e39a2ca9e768ab2b39877e596ba88b8b321a.
---
 dev-support/docker/Dockerfile | 2 --
 1 file changed, 2 deletions(-)

diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index 145db61..3b8ff78 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -156,8 +156,6 @@ RUN apt-get -q update && apt-get -q install -y bats
 # https://github.com/PyCQA/pylint/issues/2294
 
 RUN pip2 install \
-astroid==1.6.6 \
-isort==4.3.21 \
 configparser==4.0.2 \
 pylint==1.9.2 \
 isort==4.3.21


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.2 updated: Revert "HADOOP-17246. Fix build the hadoop-build Docker image failed (#2277)"

2020-09-16 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 875219b  Revert "HADOOP-17246. Fix build the hadoop-build Docker image 
failed (#2277)"
875219b is described below

commit 875219bc8e0bca7e7e1a8680e47498943c4c57a2
Author: Akira Ajisaka 
AuthorDate: Wed Sep 16 16:46:37 2020 +0900

Revert "HADOOP-17246. Fix build the hadoop-build Docker image failed 
(#2277)"

This reverts commit ffc101a27be197a63bab29cf916748289e3baa04.
---
 dev-support/docker/Dockerfile | 2 --
 1 file changed, 2 deletions(-)

diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index caa3bb6..5c22f1e 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -167,8 +167,6 @@ RUN apt-get -q update \
 # https://github.com/PyCQA/pylint/issues/2294
 
 RUN pip2 install \
-astroid==1.6.6 \
-isort==4.3.21 \
 configparser==4.0.2 \
 pylint==1.9.2 \
 isort==4.3.21





[hadoop] branch branch-3.3 updated: HADOOP-17246. Addendum patch for branch-3.3.

2020-09-16 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 74c0764  HADOOP-17246. Addendum patch for branch-3.3.
74c0764 is described below

commit 74c0764343b68b333b8b27ea7b474838558a1962
Author: Akira Ajisaka 
AuthorDate: Wed Sep 16 16:45:02 2020 +0900

HADOOP-17246. Addendum patch for branch-3.3.
---
 dev-support/docker/Dockerfile | 1 -
 1 file changed, 1 deletion(-)

diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index 53aaa86..2833606 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -168,7 +168,6 @@ RUN apt-get -q update \
 
 RUN pip2 install \
 astroid==1.6.6 \
-isort==4.3.21 \
 configparser==4.0.2 \
 pylint==1.9.2 \
 isort==4.3.21





[hadoop] branch branch-3.1 updated: HADOOP-17246. Fix build the hadoop-build Docker image failed (#2277)

2020-09-16 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 63c8e39  HADOOP-17246. Fix build the hadoop-build Docker image failed 
(#2277)
63c8e39 is described below

commit 63c8e39a2ca9e768ab2b39877e596ba88b8b321a
Author: Wanqiang Ji 
AuthorDate: Wed Sep 16 15:23:57 2020 +0800

HADOOP-17246. Fix build the hadoop-build Docker image failed (#2277)

(cherry picked from commit ce861836918c0c8e6f0294827e82e90edc984ec3)

 Conflicts:
dev-support/docker/Dockerfile_aarch64

(cherry picked from commit ffc101a27be197a63bab29cf916748289e3baa04)
---
 dev-support/docker/Dockerfile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index 3b8ff78..145db61 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -156,6 +156,8 @@ RUN apt-get -q update && apt-get -q install -y bats
 # https://github.com/PyCQA/pylint/issues/2294
 
 RUN pip2 install \
+astroid==1.6.6 \
+isort==4.3.21 \
 configparser==4.0.2 \
 pylint==1.9.2 \
 isort==4.3.21





[hadoop] branch branch-3.3 updated: HADOOP-17246. Fix build the hadoop-build Docker image failed (#2277)

2020-09-16 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new cda7d6c  HADOOP-17246. Fix build the hadoop-build Docker image failed 
(#2277)
cda7d6c is described below

commit cda7d6ca85bd21085234ea2cd52d1c9371f11ae5
Author: Wanqiang Ji 
AuthorDate: Wed Sep 16 15:23:57 2020 +0800

HADOOP-17246. Fix build the hadoop-build Docker image failed (#2277)

(cherry picked from commit ce861836918c0c8e6f0294827e82e90edc984ec3)
---
 dev-support/docker/Dockerfile | 2 ++
 dev-support/docker/Dockerfile_aarch64 | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index de2cbc6..53aaa86 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -167,6 +167,8 @@ RUN apt-get -q update \
 # https://github.com/PyCQA/pylint/issues/2294
 
 RUN pip2 install \
+astroid==1.6.6 \
+isort==4.3.21 \
 configparser==4.0.2 \
 pylint==1.9.2 \
 isort==4.3.21
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index 511a451..ab588ed 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -175,6 +175,8 @@ RUN apt-get -q update \
 # https://github.com/PyCQA/pylint/issues/2294
 
 RUN pip2 install \
+astroid==1.6.6 \
+isort==4.3.21 \
 configparser==4.0.2 \
 pylint==1.9.2
 





[hadoop] branch branch-3.2 updated: HADOOP-17246. Fix build the hadoop-build Docker image failed (#2277)

2020-09-16 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new ffc101a  HADOOP-17246. Fix build the hadoop-build Docker image failed 
(#2277)
ffc101a is described below

commit ffc101a27be197a63bab29cf916748289e3baa04
Author: Wanqiang Ji 
AuthorDate: Wed Sep 16 15:23:57 2020 +0800

HADOOP-17246. Fix build the hadoop-build Docker image failed (#2277)

(cherry picked from commit ce861836918c0c8e6f0294827e82e90edc984ec3)

 Conflicts:
dev-support/docker/Dockerfile_aarch64
---
 dev-support/docker/Dockerfile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index 5c22f1e..caa3bb6 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -167,6 +167,8 @@ RUN apt-get -q update \
 # https://github.com/PyCQA/pylint/issues/2294
 
 RUN pip2 install \
+astroid==1.6.6 \
+isort==4.3.21 \
 configparser==4.0.2 \
 pylint==1.9.2 \
 isort==4.3.21





[hadoop] branch trunk updated (5c5b2ed -> ce86183)

2020-09-16 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 5c5b2ed  HDFS-15576. Erasure Coding: Add rs and rs-legacy codec test 
for addPolicies. Contributed by Fei Hui.
 add ce86183  HADOOP-17246. Fix build the hadoop-build Docker image failed 
(#2277)

No new revisions were added by this update.

Summary of changes:
 dev-support/docker/Dockerfile | 2 ++
 dev-support/docker/Dockerfile_aarch64 | 2 ++
 2 files changed, 4 insertions(+)

