[hadoop] branch trunk updated (3ccc962 -> 3fc007a)

2020-09-24 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 3ccc962  HDFS-15596: ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only. (#2333). Contributed by Uma Maheswara Rao G.
 add 3fc007a  HADOOP-17282. libzstd-dev should be used instead of libzstd1-dev on Ubuntu 18.04 or higher. (#2336)

No new revisions were added by this update.

Summary of changes:
 dev-support/docker/Dockerfile | 2 +-
 dev-support/docker/Dockerfile_aarch64 | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)





[hadoop] branch trunk updated: HDFS-15596: ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only. (#2333). Contributed by Uma Maheswara Rao G.

2020-09-24 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3ccc962  HDFS-15596: ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only. (#2333). Contributed by Uma Maheswara Rao G.
3ccc962 is described below

commit 3ccc962b990f7f24e9b430b86da6f93be9ac554e
Author: Uma Maheswara Rao G 
AuthorDate: Thu Sep 24 07:07:48 2020 -0700

HDFS-15596: ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only. (#2333). Contributed by Uma Maheswara Rao G.

Co-authored-by: Uma Maheswara Rao G 
---
 .../java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java   | 9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
index 4fee963..2894a24 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
@@ -376,7 +376,6 @@ public class ViewDistributedFileSystem extends DistributedFileSystem {
   }
 
   @Override
-  //DFS specific API
   public FSDataOutputStream create(final Path f, final FsPermission permission,
  final EnumSet<CreateFlag> cflags, final int bufferSize,
   final short replication, final long blockSize,
@@ -387,12 +386,8 @@ public class ViewDistributedFileSystem extends DistributedFileSystem {
   .create(f, permission, cflags, bufferSize, replication, blockSize,
   progress, checksumOpt);
 }
-ViewFileSystemOverloadScheme.MountPathInfo mountPathInfo =
-this.vfs.getMountPathInfo(f, getConf());
-checkDFS(mountPathInfo.getTargetFs(), "create");
-return mountPathInfo.getTargetFs()
-.create(mountPathInfo.getPathOnTarget(), permission, cflags, bufferSize,
-replication, blockSize, progress, checksumOpt);
+return vfs.create(f, permission, cflags, bufferSize, replication, blockSize,
+progress, checksumOpt);
   }
 
   void checkDFS(FileSystem fs, String methodName) {
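
For readers who want to see the affected overload in context, here is a minimal, hypothetical client-side sketch (the path, sizes and mount setup are illustrative, not taken from the patch). With this change the same call works whether the path resolves to HDFS or, through ViewHDFS mount points, to another FileSystem implementation.

import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class CreateThroughViewHdfs {
  public static void main(String[] args) throws Exception {
    // Resolve whatever fs.defaultFS points at; with ViewHDFS configured,
    // a path like /data may be mounted on a non-DFS target file system.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/data/example.txt");   // hypothetical mount path
    FsPermission perm = FsPermission.getFileDefault();
    EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE);

    // The overload touched by HDFS-15596: buffer size, replication, block size,
    // progress callback and checksum options are all passed explicitly.
    try (FSDataOutputStream out = fs.create(file, perm, flags,
        4096,                           // bufferSize
        (short) 1,                      // replication
        128L * 1024 * 1024,             // blockSize
        null,                           // Progressable (none)
        (Options.ChecksumOpt) null)) {  // use the file system's default checksum
      out.writeUTF("hello");
    }
  }
}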





[hadoop] branch trunk updated (ff59fbb -> 486ddb7)

2020-09-24 Thread tasanuma
This is an automated email from the ASF dual-hosted git repository.

tasanuma pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from ff59fbb  HDFS-15025. Applying NVDIMM storage media to HDFS (#2189)
 add 486ddb7  HADOOP-17283. Hadoop - Upgrade to jQuery 3.5.1 (#2330)

No new revisions were added by this update.

Summary of changes:
 LICENSE-binary | 2 +-
 LICENSE.txt   | 2 +-
 .../hadoop-hdfs-rbf/src/main/webapps/router/explorer.html | 2 +-
 .../hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html | 2 +-
 hadoop-hdfs-project/hadoop-hdfs/pom.xml   | 2 +-
 .../hadoop-hdfs/src/main/webapps/datanode/datanode.html   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html  | 2 +-
 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html   | 2 +-
 .../hadoop-hdfs/src/main/webapps/journal/journalnode.html | 4 ++--
 .../hadoop-hdfs/src/main/webapps/secondary/status.html| 2 +-
 .../hadoop-hdfs/src/main/webapps/static/jquery-3.4.1.min.js   | 2 --
 .../hadoop-hdfs/src/main/webapps/static/jquery-3.5.1.min.js   | 2 ++
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml | 2 +-
 .../src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java | 2 +-
 14 files changed, 16 insertions(+), 16 deletions(-)
 delete mode 100644 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.4.1.min.js
 create mode 100644 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.5.1.min.js





[hadoop] branch trunk updated (368f2f6 -> ff59fbb)

2020-09-24 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 368f2f6  HDFS-15590. namenode fails to start when ordered snapshot deletion feature is disabled (#2326)
 add ff59fbb  HDFS-15025. Applying NVDIMM storage media to HDFS (#2189)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/fs/StorageType.java |  19 +-
 .../java/org/apache/hadoop/fs/shell/Count.java |   2 +-
 .../java/org/apache/hadoop/fs/shell/TestCount.java |   4 +-
 .../apache/hadoop/hdfs/protocol/HdfsConstants.java |   5 +
 .../hadoop/hdfs/protocolPB/PBHelperClient.java |   4 +
 .../hadoop-hdfs-client/src/main/proto/hdfs.proto   |   1 +
 .../blockmanagement/BlockStoragePolicySuite.java   |   6 +
 .../server/datanode/fsdataset/FsVolumeSpi.java |   3 +
 .../datanode/fsdataset/impl/FsDatasetImpl.java |   6 +-
 .../datanode/fsdataset/impl/FsVolumeImpl.java  |   7 +-
 .../org/apache/hadoop/hdfs/tools/DFSAdmin.java |   6 +-
 .../src/main/resources/hdfs-default.xml |   6 +-
 .../src/site/markdown/ArchivalStorage.md   |  13 +-
 .../src/site/markdown/HdfsQuotaAdminGuide.md   |   6 +-
 .../hadoop-hdfs/src/site/markdown/WebHDFS.md   |   8 +
 .../apache/hadoop/hdfs/TestBlockStoragePolicy.java | 194 +
 .../hadoop/hdfs/net/TestDFSNetworkTopology.java | 132 +-
 .../hadoop/hdfs/protocolPB/TestPBHelper.java   |   8 +-
 .../hdfs/security/token/block/TestBlockToken.java  |   2 +-
 .../blockmanagement/TestBlockStatsMXBean.java  |  47 +++--
 .../blockmanagement/TestDatanodeManager.java   |   9 +-
 .../hdfs/server/datanode/SimulatedFSDataset.java   |   5 +
 .../hadoop/hdfs/server/datanode/TestDataDirs.java  |   8 +-
 .../hdfs/server/datanode/TestDirectoryScanner.java |   5 +
 .../datanode/extdataset/ExternalVolumeImpl.java|   5 +
 .../datanode/fsdataset/impl/TestFsVolumeList.java  |  26 +++
 .../impl/TestReservedSpaceCalculator.java  |  17 ++
 .../namenode/TestNamenodeStorageDirectives.java|  24 ++-
 .../sps/TestExternalStoragePolicySatisfier.java|  27 +++
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java |   4 +
 30 files changed, 436 insertions(+), 173 deletions(-)





[hadoop] branch trunk updated: HDFS-15590. namenode fails to start when ordered snapshot deletion feature is disabled (#2326)

2020-09-24 Thread shashikant
This is an automated email from the ASF dual-hosted git repository.

shashikant pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 368f2f6  HDFS-15590. namenode fails to start when ordered snapshot deletion feature is disabled (#2326)
368f2f6 is described below

commit 368f2f637e8dfeecdda8db2dbb1445beac053ac2
Author: bshashikant 
AuthorDate: Thu Sep 24 14:00:41 2020 +0530

HDFS-15590. namenode fails to start when ordered snapshot deletion feature is disabled (#2326)
---
 .../snapshot/DirectorySnapshottableFeature.java  | 18 ++
 .../server/namenode/snapshot/SnapshotManager.java |  9 ++---
 .../org/apache/hadoop/hdfs/TestSnapshotCommands.java |  4 ++--
 .../snapshot/TestOrderedSnapshotDeletion.java | 20 
 4 files changed, 46 insertions(+), 5 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
index 7a47ab4..8a215b5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
@@ -241,6 +241,24 @@ public class DirectorySnapshottableFeature extends DirectoryWithSnapshotFeature
   throws SnapshotException {
 final int i = searchSnapshot(DFSUtil.string2Bytes(snapshotName));
 if (i < 0) {
+  // considering a sequence like this with snapshots S1 and s2
+  // 1. Ordered snapshot deletion feature is turned on
+  // 2. Delete S2 creating edit log entry for S2 deletion
+  // 3. Delete S1
+  // 4. S2 gets deleted by snapshot gc thread creating edit log record for
+  //    S2 deletion again
+  // 5. Disable Ordered snapshot deletion feature
+  // 6. Restarting Namenode
+  // In this case, when edit log replay happens actual deletion of S2
+  // will happen when first edit log for S2 deletion gets replayed and
+  // the second edit log record replay for S2 deletion will fail as snapshot
+  // won't exist thereby failing the Namenode start
+  // The idea here is to check during edit log replay, if a certain snapshot
+  // is not found and the ordered snapshot deletion is off, ignore the error
+  if (!snapshotManager.isSnapshotDeletionOrdered() &&
+  !snapshotManager.isImageLoaded()) {
+return null;
+  }
   throw new SnapshotException("Cannot delete snapshot " + snapshotName
   + " from path " + snapshotRoot.getFullPathName()
   + ": the snapshot does not exist.");
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
index 04d6b71..2c183f7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
@@ -479,10 +479,10 @@ public class SnapshotManager implements SnapshotStatsMXBean {
   void checkSnapshotLimit(int limit, int snapshotCount, String type)
   throws SnapshotException {
 if (snapshotCount >= limit) {
-  String msg = "there are already " + (snapshotCount + 1)
+  String msg = "there are already " + snapshotCount
   + " snapshot(s) and the "  + type + " snapshot limit is "
   + limit;
-  if (fsdir.isImageLoaded()) {
+  if (isImageLoaded()) {
 // We have reached the maximum snapshot limit
 throw new SnapshotException(
 "Failed to create snapshot: " + msg);
@@ -492,7 +492,10 @@ public class SnapshotManager implements SnapshotStatsMXBean {
   }
 }
   }
-  
+
+  boolean isImageLoaded() {
+return fsdir.isImageLoaded();
+  }
   /**
* Delete a snapshot for a snapshottable directory
* @param snapshotName Name of the snapshot to be deleted
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSnapshotCommands.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSnapshotCommands.java
index 2b5a69d..32ac298 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSnapshotCommands.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSnapshotCommands.java
@@ -128,8 +128,8 @@ public class TestSnapshotCommands {
 DFSTestUtil.FsShellRun("-createSnapshot /sub3 sn2", 0,
 "Created snapshot