[hadoop] branch trunk updated (b1fc00d4b22 -> b9712223722)

2023-07-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from b1fc00d4b22 YARN-11539. Fix leaf-templates in Flexible AQC. (#5868)
 add b9712223722 HDFS-17120. Support snapshot diff based copylisting for 
flat paths. (#5885)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/tools/DistCpConstants.java   |  4 ++
 .../apache/hadoop/tools/GlobbedCopyListing.java|  2 +-
 .../org/apache/hadoop/tools/RegexCopyFilter.java   |  1 +
 .../org/apache/hadoop/tools/SimpleCopyListing.java | 70 +++---
 .../org/apache/hadoop/tools/TestCopyListing.java   | 68 +
 5 files changed, 122 insertions(+), 23 deletions(-)
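
For context, the sketch below shows how a snapshot-diff copy is typically driven from the DistCp Java API, which is the code path this change extends to flat source paths. The paths, the snapshot names s1/s2, and the specific DistCpOptions.Builder calls are illustrative assumptions, not part of this commit.

  import java.util.Collections;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.tools.DistCp;
  import org.apache.hadoop.tools.DistCpOptions;

  public class SnapshotDiffCopySketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Path source = new Path("hdfs://nn1/data");   // snapshottable source dir
      Path target = new Path("hdfs://nn2/data");   // target kept in sync

      // Equivalent of "distcp -update -diff s1 s2": copy only the changes
      // recorded between the two snapshots instead of re-listing the tree.
      DistCpOptions options = new DistCpOptions.Builder(
              Collections.singletonList(source), target)
          .withSyncFolder(true)
          .withUseDiff("s1", "s2")
          .build();

      Job job = new DistCp(conf, options).execute();
      System.out.println("copy successful: " + job.isSuccessful());
    }
  }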





[hadoop] branch trunk updated (3e2ae1da00e -> 74ddf69f808)

2023-04-10 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 3e2ae1da00e HDFS-16949 Introduce inverse quantiles for metrics where 
higher numer… (#5495)
 add 74ddf69f808 HDFS-16911. Distcp with snapshot diff to support Ozone 
filesystem. (#5364)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/tools/DistCpSync.java   | 110 ++---
 .../org/apache/hadoop/tools/TestDistCpSync.java|  67 +
 2 files changed, 140 insertions(+), 37 deletions(-)
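
As a rough illustration of what "snapshot diff to support Ozone filesystem" relies on: a cross-filesystem diff copy needs snapshots taken through the portable FileSystem API on both ends, rather than DistributedFileSystem-only calls. The ofs:// URI, volume/bucket names and snapshot names below are made up, and Ozone snapshot support is assumed rather than shown by this commit.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class GenericSnapshotSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Path bucket = new Path("ofs://ozone1/vol1/bucket1/");
      FileSystem fs = bucket.getFileSystem(conf);

      // Snapshots taken through the generic FileSystem API; filesystems
      // without snapshot support throw UnsupportedOperationException here.
      fs.createSnapshot(bucket, "s1");
      // ... writes happen between the two snapshots ...
      fs.createSnapshot(bucket, "s2");
    }
  }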





[hadoop] branch trunk updated: HADOOP-18306: Warnings should not be shown on cli console when linux user not present on client (#4474). Contributed by swamirishi.

2022-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 43112bd4726 HADOOP-18306: Warnings should not be shown on cli console 
when linux user not present on client (#4474). Contributed by swamirishi.
43112bd4726 is described below

commit 43112bd472661b4044808210a77ae938a120934f
Author: swamirishi <47532440+swamiri...@users.noreply.github.com>
AuthorDate: Mon Jun 27 17:20:58 2022 -0700

HADOOP-18306: Warnings should not be shown on cli console when linux user 
not present on client (#4474). Contributed by swamirishi.
---
 .../java/org/apache/hadoop/security/ShellBasedUnixGroupsMapping.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedUnixGroupsMapping.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedUnixGroupsMapping.java
index f4db520ac24..d0c4e11cbef 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedUnixGroupsMapping.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedUnixGroupsMapping.java
@@ -215,7 +215,7 @@ public class ShellBasedUnixGroupsMapping extends Configured
   groups = resolvePartialGroupNames(user, e.getMessage(),
   executor.getOutput());
 } catch (PartialGroupNameException pge) {
-  LOG.warn("unable to return groups for user {}", user, pge);
+  LOG.debug("unable to return groups for user {}", user, pge);
   return EMPTY_GROUPS_SET;
 }
   }
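
To see the behaviour this one-line change affects, the sketch below calls the shell-based mapping directly for a principal that has no local Unix account; the user name is made up. After this commit the "unable to return groups" message is emitted at DEBUG, so it no longer lands on the client console unless that logger is explicitly raised to DEBUG.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.security.ShellBasedUnixGroupsMapping;

  public class GroupLookupSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // This is the provider normally selected via
      // hadoop.security.group.mapping in core-site.xml.
      ShellBasedUnixGroupsMapping mapping = new ShellBasedUnixGroupsMapping();
      mapping.setConf(conf);

      // A user known to the cluster but absent on this client box: the
      // lookup now returns an empty group list without warning on stdout.
      System.out.println(mapping.getGroups("remote-only-service-user"));
    }
  }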





[hadoop] branch HDFS-15714 created (now d820095)

2021-01-24 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a change to branch HDFS-15714
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at d820095  HADOOP-17478. Improve the description of 
hadoop.http.authentication.signature.secret.file (#2628)

No new revisions were added by this update.





[hadoop] branch trunk updated: HDFS-15635. ViewFileSystemOverloadScheme support specifying mount table loader imp through conf (#2389). Contributed by Junfan Zhang.

2020-11-19 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 8fa699b  HDFS-15635. ViewFileSystemOverloadScheme support specifying 
mount table loader imp through conf (#2389). Contributed by Junfan Zhang.
8fa699b is described below

commit 8fa699b53fea8728e008c46af949f92543c08170
Author: zhang_jf 
AuthorDate: Fri Nov 20 12:21:16 2020 +0800

HDFS-15635. ViewFileSystemOverloadScheme support specifying mount table 
loader imp through conf (#2389). Contributed by Junfan Zhang.
---
 .../org/apache/hadoop/fs/viewfs/Constants.java |  7 +
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 33 --
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 5c27692..8235e93 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -125,4 +125,11 @@ public interface Constants {
   "fs.viewfs.ignore.port.in.mount.table.name";
 
   boolean CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT = false;
+
+  String CONFIG_VIEWFS_MOUNTTABLE_LOADER_IMPL =
+  CONFIG_VIEWFS_PREFIX + ".config.loader.impl";
+
+  Class<? extends MountTableConfigLoader>
+  DEFAULT_MOUNT_TABLE_CONFIG_LOADER_IMPL =
+  HCFSMountTableConfigLoader.class;
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index 12877cc..773793b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
@@ -17,6 +17,10 @@
  */
 package org.apache.hadoop.fs.viewfs;
 
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNTTABLE_LOADER_IMPL;
+import static 
org.apache.hadoop.fs.viewfs.Constants.DEFAULT_MOUNT_TABLE_CONFIG_LOADER_IMPL;
+
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.lang.reflect.Constructor;
@@ -30,8 +34,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsConstants;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
-
-import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+import org.apache.hadoop.util.ReflectionUtils;
 
 /**
  *  This class is extended from the ViewFileSystem for the overloaded
@@ -160,7 +163,7 @@ public class ViewFileSystemOverloadScheme extends 
ViewFileSystem {
 
conf.getBoolean(Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME,
 true));
 if (null != mountTableConfigPath) {
-  MountTableConfigLoader loader = new HCFSMountTableConfigLoader();
+  MountTableConfigLoader loader = getMountTableConfigLoader(conf);
   loader.load(mountTableConfigPath, conf);
 } else {
   // TODO: Should we fail here.?
@@ -173,6 +176,30 @@ public class ViewFileSystemOverloadScheme extends 
ViewFileSystem {
 super.initialize(theUri, conf);
   }
 
+  private MountTableConfigLoader getMountTableConfigLoader(
+  final Configuration conf) {
+Class<? extends MountTableConfigLoader> clazz =
+conf.getClass(CONFIG_VIEWFS_MOUNTTABLE_LOADER_IMPL,
+DEFAULT_MOUNT_TABLE_CONFIG_LOADER_IMPL,
+MountTableConfigLoader.class);
+
+if (clazz == null) {
+  throw new RuntimeException(
+  String.format("Errors on getting mount table loader class. "
+  + "The fs.viewfs.mounttable.config.loader.impl conf is %s. ",
+  conf.get(CONFIG_VIEWFS_MOUNTTABLE_LOADER_IMPL,
+  DEFAULT_MOUNT_TABLE_CONFIG_LOADER_IMPL.getName())));
+}
+
+try {
+  MountTableConfigLoader mountTableConfigLoader =
+  ReflectionUtils.newInstance(clazz, conf);
+  return mountTableConfigLoader;
+} catch (Exception e) {
+  throw new RuntimeException(e);
+}
+  }
+
   /**
* This method is overridden because in ViewFileSystemOverloadScheme if
* overloaded scheme matches with mounted target fs scheme, file system
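
A minimal sketch of using the new knob follows. Only the configuration key and the load(String, Configuration) call visible in this diff are taken from the change; the loader class below is hypothetical, and HCFSMountTableConfigLoader remains the default when the key is unset.

  import java.io.IOException;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.viewfs.MountTableConfigLoader;

  // Hypothetical custom loader; the interface signature is assumed from its
  // use in this diff.
  public class MyMountTableConfigLoader implements MountTableConfigLoader {
    @Override
    public void load(String mountTableConfigPath, Configuration conf)
        throws IOException {
      // Populate the fs.viewfs.mounttable.* link entries in conf from a
      // custom source (a database, ZooKeeper, ...) instead of an XML file
      // stored on a Hadoop-compatible file system.
    }

    public static void main(String[] args) {
      Configuration conf = new Configuration();
      conf.setClass("fs.viewfs.mounttable.config.loader.impl",
          MyMountTableConfigLoader.class, MountTableConfigLoader.class);
      // ViewFileSystemOverloadScheme#initialize now instantiates this loader
      // via ReflectionUtils instead of hard-coding the HCFS default.
    }
  }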





[hadoop] branch trunk updated (2e46ef9 -> b76b36e)

2020-10-12 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 2e46ef9  MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing
 add b76b36e  HDFS-15625: Namenode trashEmptier should not init ViewFs on 
startup (#2378). Contributed by Uma Maheswara Rao G.

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java   | 5 -
 .../fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java  | 4 ++--
 2 files changed, 6 insertions(+), 3 deletions(-)





[hadoop] branch trunk updated: HDFS-15598: ViewHDFS#canonicalizeUri should not be restricted to DFS only API. (#2339). Contributed by Uma Maheswara Rao G.

2020-09-25 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 899dea2  HDFS-15598: ViewHDFS#canonicalizeUri should not be restricted 
to DFS only API. (#2339). Contributed by Uma Maheswara Rao G.
899dea2 is described below

commit 899dea2a21d1016dff2bef9e22b9f9c7b908067f
Author: Uma Maheswara Rao G 
AuthorDate: Fri Sep 25 21:21:01 2020 -0700

HDFS-15598: ViewHDFS#canonicalizeUri should not be restricted to DFS only 
API. (#2339). Contributed by Uma Maheswara Rao G.

Co-authored-by: Uma Maheswara Rao G 
---
 .../apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java |  7 ++-
 .../org/apache/hadoop/hdfs/ViewDistributedFileSystem.java | 11 +--
 2 files changed, 7 insertions(+), 11 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index 5353e93..60d14d3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
@@ -97,7 +97,7 @@ import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN
  * the mount table name.
  * (3) If there are no mount links configured with the initializing uri's
  * hostname as the mount table name, then it will automatically consider the
- * current uri as fallback( ex: fs.viewfs.mounttable..linkFallBack)
+ * current uri as fallback( ex: fs.viewfs.mounttable..linkFallback)
  * target fs uri.
  */
 @InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
@@ -354,4 +354,9 @@ public class ViewFileSystemOverloadScheme extends 
ViewFileSystem {
 .getMyFs();
   }
 
+  @Override
+  @InterfaceAudience.LimitedPrivate("HDFS")
+  public URI canonicalizeUri(URI uri) {
+return super.canonicalizeUri(uri);
+  }
 }
\ No newline at end of file
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
index 2894a24..70ba886 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
@@ -1072,16 +1072,7 @@ public class ViewDistributedFileSystem extends 
DistributedFileSystem {
   return super.canonicalizeUri(uri);
 }
 
-ViewFileSystemOverloadScheme.MountPathInfo mountPathInfo = 
null;
-try {
-  mountPathInfo = this.vfs.getMountPathInfo(new Path(uri), getConf());
-} catch (IOException e) {
-  LOGGER.warn("Failed to resolve the uri as mount path", e);
-  return null;
-}
-checkDFS(mountPathInfo.getTargetFs(), "canonicalizeUri");
-return ((DistributedFileSystem) mountPathInfo.getTargetFs())
-.canonicalizeUri(uri);
+return vfs.canonicalizeUri(uri);
   }
 
   @Override





[hadoop] branch trunk updated: HDFS-15596: ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only. (#2333). Contributed

2020-09-24 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3ccc962  HDFS-15596: ViewHDFS#create(f, permission, cflags, 
bufferSize, replication, blockSize, progress, checksumOpt) should not be 
restricted to DFS only. (#2333). Contributed by Uma Maheswara Rao G.
3ccc962 is described below

commit 3ccc962b990f7f24e9b430b86da6f93be9ac554e
Author: Uma Maheswara Rao G 
AuthorDate: Thu Sep 24 07:07:48 2020 -0700

HDFS-15596: ViewHDFS#create(f, permission, cflags, bufferSize, replication, 
blockSize, progress, checksumOpt) should not be restricted to DFS only. 
(#2333). Contributed by Uma Maheswara Rao G.

Co-authored-by: Uma Maheswara Rao G 
---
 .../java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java   | 9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
index 4fee963..2894a24 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
@@ -376,7 +376,6 @@ public class ViewDistributedFileSystem extends 
DistributedFileSystem {
   }
 
   @Override
-  //DFS specific API
   public FSDataOutputStream create(final Path f, final FsPermission permission,
    final EnumSet<CreateFlag> cflags, final int bufferSize,
   final short replication, final long blockSize,
@@ -387,12 +386,8 @@ public class ViewDistributedFileSystem extends 
DistributedFileSystem {
   .create(f, permission, cflags, bufferSize, replication, blockSize,
   progress, checksumOpt);
 }
-ViewFileSystemOverloadScheme.MountPathInfo mountPathInfo =
-this.vfs.getMountPathInfo(f, getConf());
-checkDFS(mountPathInfo.getTargetFs(), "create");
-return mountPathInfo.getTargetFs()
-.create(mountPathInfo.getPathOnTarget(), permission, cflags, 
bufferSize,
-replication, blockSize, progress, checksumOpt);
+return vfs.create(f, permission, cflags, bufferSize, replication, 
blockSize,
+progress, checksumOpt);
   }
 
   void checkDFS(FileSystem fs, String methodName) {
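
For reference, the long-form create() overload that this change now routes through the view layer (instead of insisting on a DistributedFileSystem target) is the standard FileSystem call sketched below; the path, buffer size, replication and block size are illustrative only.

  import java.util.EnumSet;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.CreateFlag;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.fs.permission.FsPermission;

  public class CreateOverloadSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Path file = new Path("/user/alice/output.dat");
      FileSystem fs = file.getFileSystem(conf);

      try (FSDataOutputStream out = fs.create(file,
          FsPermission.getFileDefault(),
          EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
          4096,                  // buffer size
          (short) 1,             // replication
          128L * 1024 * 1024,    // block size
          null,                  // progressable
          null)) {               // checksum options
        out.writeUTF("hello");
      }
    }
  }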





[hadoop] branch branch-3.3 updated: HDFS-15578: Fix the rename issues with fallback fs enabled (#2305). Contributed by Uma Maheswara Rao G.

2020-09-17 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 2d9c539  HDFS-15578: Fix the rename issues with fallback fs enabled 
(#2305). Contributed by Uma Maheswara Rao G.
2d9c539 is described below

commit 2d9c5395efe830d88dee22cd2020735730a4420d
Author: Uma Maheswara Rao G 
AuthorDate: Wed Sep 16 22:43:00 2020 -0700

HDFS-15578: Fix the rename issues with fallback fs enabled (#2305). 
Contributed by Uma Maheswara Rao G.

Co-authored-by: Uma Maheswara Rao G 
(cherry picked from commit e4cb0d351450dba10cd6a0a6d999cc4423f1c2a9)
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |  24 +++--
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  52 +--
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  59 +---
 .../hadoop/fs/viewfs/TestViewfsFileStatus.java |   4 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java|   4 +-
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 101 +
 ...estViewDistributedFileSystemWithMountLinks.java |  95 ++-
 7 files changed, 307 insertions(+), 32 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index fceb73a..2a38693 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -706,19 +706,27 @@ abstract class InodeTree {
 final T targetFileSystem;
 final String resolvedPath;
 final Path remainingPath;   // to resolve in the target FileSystem
+private final boolean isLastInternalDirLink;
 
 ResolveResult(final ResultKind k, final T targetFs, final String resolveP,
-final Path remainingP) {
+final Path remainingP, boolean isLastIntenalDirLink) {
   kind = k;
   targetFileSystem = targetFs;
   resolvedPath = resolveP;
   remainingPath = remainingP;
+  this.isLastInternalDirLink = isLastIntenalDirLink;
 }
 
 // Internal dir path resolution completed within the mount table
 boolean isInternalDir() {
   return (kind == ResultKind.INTERNAL_DIR);
 }
+
+// Indicates whether the internal dir path resolution completed at the link
+// or resolved due to fallback.
+boolean isLastInternalDirLink() {
+  return this.isLastInternalDirLink;
+}
   }
 
   /**
@@ -737,7 +745,7 @@ abstract class InodeTree {
   getRootDir().getInternalDirFs()
   : getRootLink().getTargetFileSystem();
   resolveResult = new ResolveResult(ResultKind.INTERNAL_DIR,
-  targetFs, root.fullPath, SlashPath);
+  targetFs, root.fullPath, SlashPath, false);
   return resolveResult;
 }
 
@@ -755,7 +763,8 @@ abstract class InodeTree {
   }
   remainingPath = new Path(remainingPathStr.toString());
   resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR,
-  getRootLink().getTargetFileSystem(), root.fullPath, remainingPath);
+  getRootLink().getTargetFileSystem(), root.fullPath, remainingPath,
+  true);
   return resolveResult;
 }
 Preconditions.checkState(root.isInternalDir());
@@ -775,7 +784,7 @@ abstract class InodeTree {
 if (hasFallbackLink()) {
   resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR,
   getRootFallbackLink().getTargetFileSystem(), root.fullPath,
-  new Path(p));
+  new Path(p), false);
   return resolveResult;
 } else {
   StringBuilder failedAt = new StringBuilder(path[0]);
@@ -801,7 +810,8 @@ abstract class InodeTree {
   remainingPath = new Path(remainingPathStr.toString());
 }
 resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR,
-link.getTargetFileSystem(), nextInode.fullPath, remainingPath);
+link.getTargetFileSystem(), nextInode.fullPath, remainingPath,
+true);
 return resolveResult;
   } else if (nextInode.isInternalDir()) {
 curInode = (INodeDir) nextInode;
@@ -824,7 +834,7 @@ abstract class InodeTree {
   remainingPath = new Path(remainingPathStr.toString());
 }
 resolveResult = new ResolveResult(ResultKind.INTERNAL_DIR,
-curInode.getInternalDirFs(), curInode.fullPath, remainingPath);
+curInode.getInternalDirFs(), curInode.fullPath, remainingPath, false);
 return resolveResult;
   }
 
@@ -874,7 +884,7 @@ abstract class InodeTree {
   T targetFs = getTargetFileSystem(
   new URI(targetOfResolvedPathStr));
   return new ResolveResult(resultKind, targetFs, resolvedPathStr,
-  remainingPath

[hadoop] branch trunk updated: HDFS-15578: Fix the rename issues with fallback fs enabled (#2305). Contributed by Uma Maheswara Rao G.

2020-09-16 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e4cb0d3  HDFS-15578: Fix the rename issues with fallback fs enabled 
(#2305). Contributed by Uma Maheswara Rao G.
e4cb0d3 is described below

commit e4cb0d351450dba10cd6a0a6d999cc4423f1c2a9
Author: Uma Maheswara Rao G 
AuthorDate: Wed Sep 16 22:43:00 2020 -0700

HDFS-15578: Fix the rename issues with fallback fs enabled (#2305). 
Contributed by Uma Maheswara Rao G.

Co-authored-by: Uma Maheswara Rao G 
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |  24 +++--
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  52 +--
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  59 +---
 .../hadoop/fs/viewfs/TestViewfsFileStatus.java |   4 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java|   4 +-
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 101 +
 ...estViewDistributedFileSystemWithMountLinks.java |  95 ++-
 7 files changed, 307 insertions(+), 32 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index fceb73a..2a38693 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -706,19 +706,27 @@ abstract class InodeTree {
 final T targetFileSystem;
 final String resolvedPath;
 final Path remainingPath;   // to resolve in the target FileSystem
+private final boolean isLastInternalDirLink;
 
 ResolveResult(final ResultKind k, final T targetFs, final String resolveP,
-final Path remainingP) {
+final Path remainingP, boolean isLastIntenalDirLink) {
   kind = k;
   targetFileSystem = targetFs;
   resolvedPath = resolveP;
   remainingPath = remainingP;
+  this.isLastInternalDirLink = isLastIntenalDirLink;
 }
 
 // Internal dir path resolution completed within the mount table
 boolean isInternalDir() {
   return (kind == ResultKind.INTERNAL_DIR);
 }
+
+// Indicates whether the internal dir path resolution completed at the link
+// or resolved due to fallback.
+boolean isLastInternalDirLink() {
+  return this.isLastInternalDirLink;
+}
   }
 
   /**
@@ -737,7 +745,7 @@ abstract class InodeTree {
   getRootDir().getInternalDirFs()
   : getRootLink().getTargetFileSystem();
   resolveResult = new ResolveResult(ResultKind.INTERNAL_DIR,
-  targetFs, root.fullPath, SlashPath);
+  targetFs, root.fullPath, SlashPath, false);
   return resolveResult;
 }
 
@@ -755,7 +763,8 @@ abstract class InodeTree {
   }
   remainingPath = new Path(remainingPathStr.toString());
   resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR,
-  getRootLink().getTargetFileSystem(), root.fullPath, remainingPath);
+  getRootLink().getTargetFileSystem(), root.fullPath, remainingPath,
+  true);
   return resolveResult;
 }
 Preconditions.checkState(root.isInternalDir());
@@ -775,7 +784,7 @@ abstract class InodeTree {
 if (hasFallbackLink()) {
   resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR,
   getRootFallbackLink().getTargetFileSystem(), root.fullPath,
-  new Path(p));
+  new Path(p), false);
   return resolveResult;
 } else {
   StringBuilder failedAt = new StringBuilder(path[0]);
@@ -801,7 +810,8 @@ abstract class InodeTree {
   remainingPath = new Path(remainingPathStr.toString());
 }
 resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR,
-link.getTargetFileSystem(), nextInode.fullPath, remainingPath);
+link.getTargetFileSystem(), nextInode.fullPath, remainingPath,
+true);
 return resolveResult;
   } else if (nextInode.isInternalDir()) {
 curInode = (INodeDir) nextInode;
@@ -824,7 +834,7 @@ abstract class InodeTree {
   remainingPath = new Path(remainingPathStr.toString());
 }
 resolveResult = new ResolveResult(ResultKind.INTERNAL_DIR,
-curInode.getInternalDirFs(), curInode.fullPath, remainingPath);
+curInode.getInternalDirFs(), curInode.fullPath, remainingPath, false);
 return resolveResult;
   }
 
@@ -874,7 +884,7 @@ abstract class InodeTree {
   T targetFs = getTargetFileSystem(
   new URI(targetOfResolvedPathStr));
   return new ResolveResult(resultKind, targetFs, resolvedPathStr,
-  remainingPath);
+  remainingPath, true);
 } catch (IOException ex) {
   LOGGER.error


[hadoop] branch branch-3.3 updated: HDFS-15529: getChildFilesystems should include fallback fs as well (#2234). Contributed by Uma Maheswara Rao G.

2020-09-12 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 1195dac  HDFS-15529: getChildFilesystems should include fallback fs as 
well (#2234). Contributed by Uma Maheswara Rao G.
1195dac is described below

commit 1195dac55e995eeea22cded88be602030c09cf2d
Author: Uma Maheswara Rao G 
AuthorDate: Thu Sep 3 11:06:20 2020 -0700

HDFS-15529: getChildFilesystems should include fallback fs as well (#2234). 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit b3660d014708de3d0fb04c9c152934f6020a65ae)
---
 .../main/java/org/apache/hadoop/fs/viewfs/InodeTree.java |  9 +
 .../java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java |  6 ++
 .../TestViewFileSystemOverloadSchemeWithHdfsScheme.java  | 16 +---
 3 files changed, 28 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index dbcd9b4..fceb73a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -408,6 +408,15 @@ abstract class InodeTree {
 return rootFallbackLink != null;
   }
 
+  /**
+   * @return true if the root represented as internalDir. In LinkMergeSlash,
+   * there will be root to root mapping. So, root does not represent as
+   * internalDir.
+   */
+  protected boolean isRootInternalDir() {
+return root.isInternalDir();
+  }
+
   protected INodeLink getRootFallbackLink() {
 Preconditions.checkState(root.isInternalDir());
 return rootFallbackLink;
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 0e190a3..b906996 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -959,6 +959,12 @@ public class ViewFileSystem extends FileSystem {
   FileSystem targetFs = mountPoint.target.targetFileSystem;
   children.addAll(Arrays.asList(targetFs.getChildFileSystems()));
 }
+
+if (fsState.isRootInternalDir() && fsState.getRootFallbackLink() != null) {
+  children.addAll(Arrays.asList(
+  fsState.getRootFallbackLink().targetFileSystem
+  .getChildFileSystems()));
+}
 return children.toArray(new FileSystem[]{});
   }
   
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
index 31674f8..9a858e1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
@@ -476,10 +476,18 @@ public class 
TestViewFileSystemOverloadSchemeWithHdfsScheme {
 // 2. Two hdfs file systems should be there if no cache.
 conf.setBoolean(Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE, false);
 try (FileSystem vfs = FileSystem.get(conf)) {
-  Assert.assertEquals(2, vfs.getChildFileSystems().length);
+  Assert.assertEquals(isFallBackExist(conf) ? 3 : 2,
+  vfs.getChildFileSystems().length);
 }
   }
 
+  // HDFS-15529: if any extended tests added fallback, then getChildFileSystems
+  // will include fallback as well.
+  private boolean isFallBackExist(Configuration config) {
+return config.get(ConfigUtil.getConfigViewFsPrefix(defaultFSURI
+.getAuthority()) + "." + Constants.CONFIG_VIEWFS_LINK_FALLBACK) != 
null;
+  }
+
   /**
* Create mount links as follows
* hdfs://localhost:xxx/HDFSUser0 --> hdfs://localhost:xxx/HDFSUser/
@@ -501,7 +509,8 @@ public class TestViewFileSystemOverloadSchemeWithHdfsScheme 
{
 conf.setBoolean(Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE, false);
 // Two hdfs file systems should be there if no cache.
 try (FileSystem vfs = FileSystem.get(conf)) {
-  Assert.assertEquals(2, vfs.getChildFileSystems().length);
+  Assert.assertEquals(isFallBackExist(conf) ? 3 : 2,
+  vfs.getChildFileSystems().length);
 }
   }
 
@@ -528,7 +537,8 @@ public class TestViewFileSystemOverloadSchemeWithHdfsScheme 
{
 // cache should work.
 conf.setBoolean(Constants.CONFIG_VIEWFS_ENABLE_INNER_CA

[hadoop] branch branch-3.3 updated: HADOOP-15891. provide Regex Based Mount Point In Inode Tree (#2185). Contributed by Zhenzhao Wang.

2020-09-12 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 2d5ca83  HADOOP-15891. provide Regex Based Mount Point In Inode Tree 
(#2185). Contributed by Zhenzhao Wang.
2d5ca83 is described below

commit 2d5ca830782016069ce17bd86a1018baa55de148
Author: zz <40777829+johnzzgit...@users.noreply.github.com>
AuthorDate: Thu Sep 10 21:20:32 2020 -0700

HADOOP-15891. provide Regex Based Mount Point In Inode Tree (#2185). 
Contributed by Zhenzhao Wang.

Co-authored-by: Zhenzhao Wang 
(cherry picked from commit 12a316cdf9994feaa36c3ff7d13e67d70398a9f3)
---
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java|  22 +
 .../org/apache/hadoop/fs/viewfs/Constants.java |   8 +
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 340 ++-
 .../apache/hadoop/fs/viewfs/RegexMountPoint.java   | 289 +
 .../fs/viewfs/RegexMountPointInterceptor.java  |  70 
 .../viewfs/RegexMountPointInterceptorFactory.java  |  67 +++
 .../fs/viewfs/RegexMountPointInterceptorType.java  |  53 +++
 ...ountPointResolvedDstPathReplaceInterceptor.java | 137 ++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  55 ++-
 .../hadoop/fs/viewfs/TestRegexMountPoint.java  | 160 +++
 .../TestRegexMountPointInterceptorFactory.java |  54 +++
 ...ountPointResolvedDstPathReplaceInterceptor.java | 101 +
 .../hadoop-hdfs/src/site/markdown/ViewFs.md|  63 +++
 .../fs/viewfs/TestViewFileSystemLinkRegex.java | 462 +
 14 files changed, 1765 insertions(+), 116 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
index 7d29b8f..09ec5d2 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
@@ -167,6 +167,28 @@ public class ConfigUtil {
   }
 
   /**
+   * Add a LinkRegex to the config for the specified mount table.
+   * @param conf - get mountable config from this conf
+   * @param mountTableName - the mountable name of the regex config item
+   * @param srcRegex - the src path regex expression that applies to this 
config
+   * @param targetStr - the string of target path
+   * @param interceptorSettings - the serialized interceptor string to be
+   *applied while resolving the mapping
+   */
+  public static void addLinkRegex(
+  Configuration conf, final String mountTableName, final String srcRegex,
+  final String targetStr, final String interceptorSettings) {
+String prefix = getConfigViewFsPrefix(mountTableName) + "."
++ Constants.CONFIG_VIEWFS_LINK_REGEX + ".";
+if ((interceptorSettings != null) && (!interceptorSettings.isEmpty())) {
+  prefix = prefix + interceptorSettings
+  + RegexMountPoint.SETTING_SRCREGEX_SEP;
+}
+String key = prefix + srcRegex;
+conf.set(key, targetStr);
+  }
+
+  /**
* Add config variable for homedir for default mount table
* @param conf - add to this conf
* @param homedir - the home dir path starting with slash
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 492cb87..bf9f7db 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -86,6 +86,14 @@ public interface Constants {
*/
   String CONFIG_VIEWFS_LINK_MERGE_SLASH = "linkMergeSlash";
 
+  /**
+   * Config variable for specifying a regex link which uses regular expressions
+   * as source and target could use group captured in src.
+   * E.g. (^/(?<firstDir>\\w+), /prefix-${firstDir}) =>
+   *   (/path1/file1 => /prefix-path1/file1)
+   */
+  String CONFIG_VIEWFS_LINK_REGEX = "linkRegex";
+
   FsPermission PERMISSION_555 = new FsPermission((short) 0555);
 
   String CONFIG_VIEWFS_RENAME_STRATEGY = "fs.viewfs.rename.strategy";
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 003694f..dbcd9b4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -39,6 +39,8 @@ import org.apache.hadoop.fs.Path

[hadoop] branch branch-3.3 updated: HDFS-15532: listFiles on root/InternalDir will fail if fallback root has file. (#2298). Contributed by Uma Maheswara Rao G.

2020-09-12 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new bfa145d  HDFS-15532: listFiles on root/InternalDir will fail if 
fallback root has file. (#2298). Contributed by Uma Maheswara Rao G.
bfa145d is described below

commit bfa145dd7ca567438e7dacc1dc92ef39ee674164
Author: Uma Maheswara Rao G 
AuthorDate: Sat Sep 12 17:06:39 2020 -0700

HDFS-15532: listFiles on root/InternalDir will fail if fallback root has 
file. (#2298). Contributed by Uma Maheswara Rao G.

(cherry picked from commit d2779de3f525f58790cbd6c9e3c265a9767d1d0c)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 17 +++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 15 ++
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 24 ++
 .../TestViewDistributedFileSystemContract.java |  6 --
 4 files changed, 56 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index f981af8..0e190a3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1277,6 +1277,23 @@ public class ViewFileSystem extends FileSystem {
 public BlockLocation[] getFileBlockLocations(final FileStatus fs,
 final long start, final long len) throws
 FileNotFoundException, IOException {
+
+  // When application calls listFiles on internalDir, it would return
+  // RemoteIterator from InternalDirOfViewFs. If there is a fallBack, there
+  // is a chance of files exists under that internalDir in fallback.
+  // Iterator#next will call getFileBlockLocations with that files. So, we
+  // should return getFileBlockLocations on fallback. See HDFS-15532.
+  if (!InodeTree.SlashPath.equals(fs.getPath()) && this.fsState
+  .getRootFallbackLink() != null) {
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+Path pathToFallbackFs = new Path(parent, fs.getPath().getName());
+return linkedFallbackFs
+.getFileBlockLocations(pathToFallbackFs, start, len);
+  }
+
   checkPathIsSlash(fs.getPath());
   throw new FileNotFoundException("Path points to dir not a file");
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 95b596b..a6ce33a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -981,6 +981,21 @@ public class ViewFs extends AbstractFileSystem {
 @Override
 public BlockLocation[] getFileBlockLocations(final Path f, final long 
start,
 final long len) throws FileNotFoundException, IOException {
+  // When application calls listFiles on internalDir, it would return
+  // RemoteIterator from InternalDirOfViewFs. If there is a fallBack, there
+  // is a chance of files exists under that internalDir in fallback.
+  // Iterator#next will call getFileBlockLocations with that files. So, we
+  // should return getFileBlockLocations on fallback. See HDFS-15532.
+  if (!InodeTree.SlashPath.equals(f) && this.fsState
+  .getRootFallbackLink() != null) {
+AbstractFileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+Path pathToFallbackFs = new Path(parent, f.getName());
+return linkedFallbackFs
+.getFileBlockLocations(pathToFallbackFs, start, len);
+  }
   checkPathIsSlash(f);
   throw new FileNotFoundException("Path points to dir not a file");
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
index 04d26b9..dc2eb0e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
@@ -18,6 +18,7 @@

[hadoop] branch trunk updated: HDFS-15532: listFiles on root/InternalDir will fail if fallback root has file. (#2298). Contributed by Uma Maheswara Rao G.

2020-09-12 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d2779de  HDFS-15532: listFiles on root/InternalDir will fail if 
fallback root has file. (#2298). Contributed by Uma Maheswara Rao G.
d2779de is described below

commit d2779de3f525f58790cbd6c9e3c265a9767d1d0c
Author: Uma Maheswara Rao G 
AuthorDate: Sat Sep 12 17:06:39 2020 -0700

HDFS-15532: listFiles on root/InternalDir will fail if fallback root has 
file. (#2298). Contributed by Uma Maheswara Rao G.
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 17 +++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 15 ++
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 24 ++
 .../TestViewDistributedFileSystemContract.java |  6 --
 4 files changed, 56 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index c7ed15b..b906996 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1283,6 +1283,23 @@ public class ViewFileSystem extends FileSystem {
 public BlockLocation[] getFileBlockLocations(final FileStatus fs,
 final long start, final long len) throws
 FileNotFoundException, IOException {
+
+  // When application calls listFiles on internalDir, it would return
+  // RemoteIterator from InternalDirOfViewFs. If there is a fallBack, there
+  // is a chance of files exists under that internalDir in fallback.
+  // Iterator#next will call getFileBlockLocations with that files. So, we
+  // should return getFileBlockLocations on fallback. See HDFS-15532.
+  if (!InodeTree.SlashPath.equals(fs.getPath()) && this.fsState
+  .getRootFallbackLink() != null) {
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+Path pathToFallbackFs = new Path(parent, fs.getPath().getName());
+return linkedFallbackFs
+.getFileBlockLocations(pathToFallbackFs, start, len);
+  }
+
   checkPathIsSlash(fs.getPath());
   throw new FileNotFoundException("Path points to dir not a file");
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 95b596b..a6ce33a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -981,6 +981,21 @@ public class ViewFs extends AbstractFileSystem {
 @Override
 public BlockLocation[] getFileBlockLocations(final Path f, final long 
start,
 final long len) throws FileNotFoundException, IOException {
+  // When application calls listFiles on internalDir, it would return
+  // RemoteIterator from InternalDirOfViewFs. If there is a fallBack, there
+  // is a chance of files exists under that internalDir in fallback.
+  // Iterator#next will call getFileBlockLocations with that files. So, we
+  // should return getFileBlockLocations on fallback. See HDFS-15532.
+  if (!InodeTree.SlashPath.equals(f) && this.fsState
+  .getRootFallbackLink() != null) {
+AbstractFileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+Path pathToFallbackFs = new Path(parent, f.getName());
+return linkedFallbackFs
+.getFileBlockLocations(pathToFallbackFs, start, len);
+  }
   checkPathIsSlash(f);
   throw new FileNotFoundException("Path points to dir not a file");
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
index 04d26b9..dc2eb0e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsLinkFallback.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.fs.viewfs;
 
 import static org.apache.hadoop.

[hadoop] branch trunk updated (9960c01 -> 12a316c)

2020-09-10 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 9960c01  HADOOP-17244. S3A directory delete tombstones dir markers 
prematurely. (#2280)
 add 12a316c  HADOOP-15891. provide Regex Based Mount Point In Inode Tree 
(#2185). Contributed by Zhenzhao Wang.

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java|  22 +
 .../org/apache/hadoop/fs/viewfs/Constants.java |   8 +
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 340 ++-
 .../apache/hadoop/fs/viewfs/RegexMountPoint.java   | 289 +
 .../fs/viewfs/RegexMountPointInterceptor.java  |  70 
 .../viewfs/RegexMountPointInterceptorFactory.java  |  67 +++
 .../fs/viewfs/RegexMountPointInterceptorType.java  |  53 +++
 ...ountPointResolvedDstPathReplaceInterceptor.java | 137 ++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  55 ++-
 .../hadoop/fs/viewfs/TestRegexMountPoint.java  | 160 +++
 .../TestRegexMountPointInterceptorFactory.java |  54 +++
 ...ountPointResolvedDstPathReplaceInterceptor.java | 101 +
 .../hadoop-hdfs/src/site/markdown/ViewFs.md|  63 +++
 .../fs/viewfs/TestViewFileSystemLinkRegex.java | 462 +
 14 files changed, 1765 insertions(+), 116 deletions(-)
 create mode 100644 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/RegexMountPoint.java
 create mode 100644 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/RegexMountPointInterceptor.java
 create mode 100644 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/RegexMountPointInterceptorFactory.java
 create mode 100644 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/RegexMountPointInterceptorType.java
 create mode 100644 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/RegexMountPointResolvedDstPathReplaceInterceptor.java
 create mode 100644 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestRegexMountPoint.java
 create mode 100644 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestRegexMountPointInterceptorFactory.java
 create mode 100644 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestRegexMountPointResolvedDstPathReplaceInterceptor.java
 create mode 100644 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkRegex.java
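
A short configuration sketch for the new regex mount points follows; the mount table name, the regular expression and the replacement target are made up, while the ConfigUtil.addLinkRegex signature matches the helper added by this commit.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.viewfs.ConfigUtil;

  public class RegexMountPointSketch {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Resolve /user/<name> on the view to hdfs://nn1/project-<name>,
      // reusing the named capture group in the target path.
      ConfigUtil.addLinkRegex(conf, "cluster1",
          "^/user/(?<username>\\w+)",
          "hdfs://nn1/project-${username}",
          null /* no interceptor settings */);
    }
  }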





[hadoop] branch trunk updated: HDFS-15558: ViewDistributedFileSystem#recoverLease should call super.recoverLease when there are no mounts configured (#2275) Contributed by Uma Maheswara Rao G.

2020-09-07 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ac7d462  HDFS-15558: ViewDistributedFileSystem#recoverLease should 
call super.recoverLease when there are no mounts configured (#2275) Contributed 
by Uma Maheswara Rao G.
ac7d462 is described below

commit ac7d4623aefe9d9bd819ff6fbffae15ad983c5f3
Author: Uma Maheswara Rao G 
AuthorDate: Mon Sep 7 11:36:13 2020 -0700

HDFS-15558: ViewDistributedFileSystem#recoverLease should call 
super.recoverLease when there are no mounts configured (#2275) Contributed by 
Uma Maheswara Rao G.
---
 .../hadoop/hdfs/ViewDistributedFileSystem.java |  7 +++
 .../org/apache/hadoop/hdfs/TestLeaseRecovery.java  | 16 ++-
 .../hadoop/hdfs/TestViewDistributedFileSystem.java | 23 ++
 3 files changed, 45 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
index 0a68169..1afb5d9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
@@ -266,6 +266,10 @@ public class ViewDistributedFileSystem extends 
DistributedFileSystem {
 
   @Override
   public boolean recoverLease(final Path f) throws IOException {
+if (this.vfs == null) {
+  return super.recoverLease(f);
+}
+
 ViewFileSystemOverloadScheme.MountPathInfo mountPathInfo =
 this.vfs.getMountPathInfo(f, getConf());
 checkDFS(mountPathInfo.getTargetFs(), "recoverLease");
@@ -286,6 +290,9 @@ public class ViewDistributedFileSystem extends 
DistributedFileSystem {
   @Override
   public FSDataInputStream open(PathHandle fd, int bufferSize)
   throws IOException {
+if (this.vfs == null) {
+  return super.open(fd, bufferSize);
+}
 return this.vfs.open(fd, bufferSize);
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
index a1cce3e..399aa1e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
@@ -280,8 +280,22 @@ public class TestLeaseRecovery {
*/
   @Test
   public void testLeaseRecoveryAndAppend() throws Exception {
+testLeaseRecoveryAndAppend(new Configuration());
+  }
+
+  /**
+   * Recover the lease on a file and append file from another client with
+   * ViewDFS enabled.
+   */
+  @Test
+  public void testLeaseRecoveryAndAppendWithViewDFS() throws Exception {
 Configuration conf = new Configuration();
-try{
+conf.set("fs.hdfs.impl", ViewDistributedFileSystem.class.getName());
+testLeaseRecoveryAndAppend(conf);
+  }
+
+  private void testLeaseRecoveryAndAppend(Configuration conf) throws Exception 
{
+try {
 cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
 Path file = new Path("/testLeaseRecovery");
 DistributedFileSystem dfs = cluster.getFileSystem();
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java
index 3c5a0be..0ba0841 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java
@@ -17,9 +17,13 @@
  */
 package org.apache.hadoop.hdfs;
 
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathHandle;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.test.Whitebox;
+import org.junit.Test;
 
 import java.io.IOException;
 
@@ -44,4 +48,23 @@ public class TestViewDistributedFileSystem extends 
TestDistributedFileSystem{
 data.set(null);
 super.testStatistics();
   }
+
+  @Test
+  public void testOpenWithPathHandle() throws Exception {
+Configuration conf = getTestConfiguration();
+MiniDFSCluster cluster = null;
+try {
+  cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
+  FileSystem fileSys = cluster.getFileSystem();
+  Path openTestPath = new Path("/testOpen");
+  fileSys.create(openTestPath).close();
+  PathHandle pathH
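
A minimal client-side sketch of what this change enables (illustration only, not part of the commit): it assumes HDFS is the default filesystem, that fs.hdfs.impl points at ViewDistributedFileSystem with no viewfs mount links configured, and that /tmp/openFile is a stand-in path.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RecoverLeaseWithViewDfsSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // No fs.viewfs.mounttable.* links configured: with this fix,
    // ViewDistributedFileSystem#recoverLease delegates to
    // DistributedFileSystem#recoverLease instead of dereferencing a null vfs.
    conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.ViewDistributedFileSystem");
    try (FileSystem fs = FileSystem.get(conf)) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      boolean closed = dfs.recoverLease(new Path("/tmp/openFile")); // hypothetical path
      System.out.println("lease recovered and file closed: " + closed);
    }
  }
}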

[hadoop] branch trunk updated: HDFS-15529: getChildFilesystems should include fallback fs as well (#2234). Contributed by Uma Maheswara Rao G.

2020-09-03 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b3660d0  HDFS-15529: getChildFilesystems should include fallback fs as 
well (#2234). Contributed by Uma Maheswara Rao G.
b3660d0 is described below

commit b3660d014708de3d0fb04c9c152934f6020a65ae
Author: Uma Maheswara Rao G 
AuthorDate: Thu Sep 3 11:06:20 2020 -0700

HDFS-15529: getChildFilesystems should include fallback fs as well (#2234). 
Contributed by Uma Maheswara Rao G.
---
 .../main/java/org/apache/hadoop/fs/viewfs/InodeTree.java |  9 +
 .../java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java |  6 ++
 .../TestViewFileSystemOverloadSchemeWithHdfsScheme.java  | 16 +---
 3 files changed, 28 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 003694f..1f1adea 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -394,6 +394,15 @@ abstract class InodeTree {
 return rootFallbackLink != null;
   }
 
+  /**
+   * @return true if the root represented as internalDir. In LinkMergeSlash,
+   * there will be root to root mapping. So, root does not represent as
+   * internalDir.
+   */
+  protected boolean isRootInternalDir() {
+return root.isInternalDir();
+  }
+
   protected INodeLink getRootFallbackLink() {
 Preconditions.checkState(root.isInternalDir());
 return rootFallbackLink;
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 8c659d1..1ba91b5 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -939,6 +939,12 @@ public class ViewFileSystem extends FileSystem {
   FileSystem targetFs = mountPoint.target.targetFileSystem;
   children.addAll(Arrays.asList(targetFs.getChildFileSystems()));
 }
+
+if (fsState.isRootInternalDir() && fsState.getRootFallbackLink() != null) {
+  children.addAll(Arrays.asList(
+  fsState.getRootFallbackLink().targetFileSystem
+  .getChildFileSystems()));
+}
 return children.toArray(new FileSystem[]{});
   }
   
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
index 31674f8..9a858e1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
@@ -476,10 +476,18 @@ public class 
TestViewFileSystemOverloadSchemeWithHdfsScheme {
 // 2. Two hdfs file systems should be there if no cache.
 conf.setBoolean(Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE, false);
 try (FileSystem vfs = FileSystem.get(conf)) {
-  Assert.assertEquals(2, vfs.getChildFileSystems().length);
+  Assert.assertEquals(isFallBackExist(conf) ? 3 : 2,
+  vfs.getChildFileSystems().length);
 }
   }
 
+  // HDFS-15529: if any extended tests added fallback, then getChildFileSystems
+  // will include fallback as well.
+  private boolean isFallBackExist(Configuration config) {
+return config.get(ConfigUtil.getConfigViewFsPrefix(defaultFSURI
+.getAuthority()) + "." + Constants.CONFIG_VIEWFS_LINK_FALLBACK) != 
null;
+  }
+
   /**
* Create mount links as follows
* hdfs://localhost:xxx/HDFSUser0 --> hdfs://localhost:xxx/HDFSUser/
@@ -501,7 +509,8 @@ public class TestViewFileSystemOverloadSchemeWithHdfsScheme 
{
 conf.setBoolean(Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE, false);
 // Two hdfs file systems should be there if no cache.
 try (FileSystem vfs = FileSystem.get(conf)) {
-  Assert.assertEquals(2, vfs.getChildFileSystems().length);
+  Assert.assertEquals(isFallBackExist(conf) ? 3 : 2,
+  vfs.getChildFileSystems().length);
 }
   }
 
@@ -528,7 +537,8 @@ public class TestViewFileSystemOverloadSchemeWithHdfsScheme 
{
 // cache should work.
 conf.setBoolean(Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE, false);
 try (FileSystem vfs = FileSystem.get(conf)) {
-  
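
As a rough illustration of the new behaviour (the mount table name clusterX and target URIs are invented for the example), the sketch below mirrors the updated test: once a linkFallback is configured, getChildFileSystems() reports the fallback's children as well.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.viewfs.ConfigUtil;

public class ChildFileSystemsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // One ordinary mount link plus a fallback link (both targets hypothetical).
    ConfigUtil.addLink(conf, "clusterX", "/data", URI.create("hdfs://nn1:8020/data"));
    ConfigUtil.addLinkFallback(conf, "clusterX", URI.create("hdfs://nn2:8020/"));
    conf.setBoolean("fs.viewfs.enable.inner.cache", false);
    try (FileSystem vfs = FileSystem.get(URI.create("viewfs://clusterX/"), conf)) {
      // With HDFS-15529 the fallback filesystem shows up here too.
      for (FileSystem child : vfs.getChildFileSystems()) {
        System.out.println(child.getUri());
      }
    }
  }
}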

[hadoop] branch branch-3.2 updated: HDFS-14096. [SPS] : Add Support for Storage Policy Satisfier in ViewFs. Contributed by Ayush Saxena.

2020-08-26 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 3bacea2  HDFS-14096. [SPS] : Add Support for Storage Policy Satisfier 
in ViewFs. Contributed by Ayush Saxena.
3bacea2 is described below

commit 3bacea2e5ebd330964bfca4c1064d8f07d09112d
Author: Surendra Singh Lilhore 
AuthorDate: Mon Dec 17 11:24:57 2018 +0530

HDFS-14096. [SPS] : Add Support for Storage Policy Satisfier in ViewFs. 
Contributed by Ayush Saxena.

(cherry picked from commit 788e7473a404fa074b3af522416ee3d2fae865a0)
---
 .../java/org/apache/hadoop/fs/AbstractFileSystem.java  | 10 ++
 .../main/java/org/apache/hadoop/fs/FileContext.java| 18 ++
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java | 10 ++
 .../java/org/apache/hadoop/fs/FilterFileSystem.java|  5 +
 .../src/main/java/org/apache/hadoop/fs/FilterFs.java   |  5 +
 .../apache/hadoop/fs/viewfs/ChRootedFileSystem.java|  5 +
 .../java/org/apache/hadoop/fs/viewfs/ChRootedFs.java   |  5 +
 .../org/apache/hadoop/fs/viewfs/ViewFileSystem.java| 13 +
 .../main/java/org/apache/hadoop/fs/viewfs/ViewFs.java  | 12 
 .../java/org/apache/hadoop/fs/TestHarFileSystem.java   |  2 ++
 .../hadoop/fs/viewfs/ViewFileSystemBaseTest.java   |  5 +
 .../src/main/java/org/apache/hadoop/fs/Hdfs.java   |  5 +
 .../org/apache/hadoop/hdfs/DistributedFileSystem.java  |  6 +-
 13 files changed, 96 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
index c7b21fc..9926a74 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
@@ -1250,6 +1250,16 @@ public abstract class AbstractFileSystem implements 
PathCapabilities {
   }
 
   /**
+   * Set the source path to satisfy storage policy.
+   * @param path The source path referring to either a directory or a file.
+   * @throws IOException
+   */
+  public void satisfyStoragePolicy(final Path path) throws IOException {
+throw new UnsupportedOperationException(
+getClass().getSimpleName() + " doesn't support satisfyStoragePolicy");
+  }
+
+  /**
* Set the storage policy for a given file or directory.
*
* @param path file or directory path.
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
index ace892d..4357c88 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
@@ -2778,6 +2778,24 @@ public class FileContext implements PathCapabilities {
   }
 
   /**
+   * Set the source path to satisfy storage policy.
+   * @param path The source path referring to either a directory or a file.
+   * @throws IOException
+   */
+  public void satisfyStoragePolicy(final Path path)
+  throws IOException {
+final Path absF = fixRelativePart(path);
+new FSLinkResolver() {
+  @Override
+  public Void next(final AbstractFileSystem fs, final Path p)
+  throws IOException {
+fs.satisfyStoragePolicy(path);
+return null;
+  }
+}.resolve(this, absF);
+  }
+
+  /**
* Set the storage policy for a given file or directory.
*
* @param path file or directory path.
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index bac398b..22586b2 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -3116,6 +3116,16 @@ public abstract class FileSystem extends Configured
   }
 
   /**
+   * Set the source path to satisfy storage policy.
+   * @param path The source path referring to either a directory or a file.
+   * @throws IOException
+   */
+  public void satisfyStoragePolicy(final Path path) throws IOException {
+throw new UnsupportedOperationException(
+getClass().getSimpleName() + " doesn't support setStoragePolicy");
+  }
+
+  /**
* Set the storage policy for a given file or directory.
*
* @param src file or directory path.
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
 
b/hadoop-common-p
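
Roughly how the new API is used from a client once a viewfs path resolves to an HDFS target with the Storage Policy Satisfier enabled; the viewfs authority and the file path below are placeholders, not taken from the patch.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SatisfyStoragePolicySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem viewFs = FileSystem.get(URI.create("viewfs://clusterX/"), conf)) {
      Path file = new Path("/coldData/part-00000");   // hypothetical path
      viewFs.setStoragePolicy(file, "COLD");          // pick the target policy first
      // ViewFs/ViewFileSystem now resolves this call to the mounted HDFS,
      // where SPS moves the blocks to matching storage.
      viewFs.satisfyStoragePolicy(file);
    }
  }
}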

[hadoop] branch branch-3.2 updated: HDFS-8631. WebHDFS : Support setQuota. Contributed by Chao Sun. (Backported)

2020-08-26 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 0512b27  HDFS-8631. WebHDFS : Support setQuota. Contributed by Chao 
Sun. (Backported)
0512b27 is described below

commit 0512b27172b87929428eeac5956dbb2cae4f2a09
Author: Uma Maheswara Rao G 
AuthorDate: Wed Aug 26 09:33:51 2020 -0700

HDFS-8631. WebHDFS : Support setQuota. Contributed by Chao Sun. (Backported)
---
 .../main/java/org/apache/hadoop/fs/FileSystem.java | 43 +++
 .../org/apache/hadoop/fs/TestFilterFileSystem.java |  2 +
 .../org/apache/hadoop/fs/TestHarFileSystem.java|  2 +
 .../apache/hadoop/hdfs/DistributedFileSystem.java  |  2 +
 .../apache/hadoop/hdfs/web/WebHdfsFileSystem.java  | 43 +++
 .../hdfs/web/resources/NameSpaceQuotaParam.java| 44 +++
 .../hadoop/hdfs/web/resources/PutOpParam.java  |  3 +
 .../hdfs/web/resources/StorageSpaceQuotaParam.java | 45 +++
 .../hdfs/web/resources/StorageTypeParam.java   | 37 +
 .../federation/router/RouterWebHdfsMethods.java| 10 +++-
 .../web/resources/NamenodeWebHdfsMethods.java  | 51 ++---
 .../hadoop-hdfs/src/site/markdown/WebHDFS.md   | 64 ++
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java| 58 
 .../hadoop/hdfs/web/resources/TestParam.java   | 29 ++
 14 files changed, 423 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index ba892ed..bac398b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -1778,6 +1778,33 @@ public abstract class FileSystem extends Configured
   }
 
   /**
+   * Set quota for the given {@link Path}.
+   *
+   * @param src the target path to set quota for
+   * @param namespaceQuota the namespace quota (i.e., # of files/directories)
+   *   to set
+   * @param storagespaceQuota the storage space quota to set
+   * @throws IOException IO failure
+   */
+  public void setQuota(Path src, final long namespaceQuota,
+  final long storagespaceQuota) throws IOException {
+methodNotSupported();
+  }
+
+  /**
+   * Set per storage type quota for the given {@link Path}.
+   *
+   * @param src the target path to set storage type quota for
+   * @param type the storage type to set
+   * @param quota the quota to set for the given storage type
+   * @throws IOException IO failure
+   */
+  public void setQuotaByStorageType(Path src, final StorageType type,
+  final long quota) throws IOException {
+methodNotSupported();
+  }
+
+  /**
* The default filter accepts all paths.
*/
   private static final PathFilter DEFAULT_FILTER = new PathFilter() {
@@ -4297,6 +4324,22 @@ public abstract class FileSystem extends Configured
   }
 
   /**
+   * Helper method that throws an {@link UnsupportedOperationException} for the
+   * current {@link FileSystem} method being called.
+   */
+  private void methodNotSupported() {
+// The order of the stacktrace elements is (from top to bottom):
+//   - java.lang.Thread.getStackTrace
+//   - org.apache.hadoop.fs.FileSystem.methodNotSupported
+//   - 
+// therefore, to find out the current method name, we use the element at
+// index 2.
+String name = Thread.currentThread().getStackTrace()[2].getMethodName();
+throw new UnsupportedOperationException(getClass().getCanonicalName() +
+" does not support method " + name);
+  }
+
+  /**
* Create a Builder to append a file.
* @param path file path.
* @return a {@link FSDataOutputStreamBuilder} to build file append request.
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
index 7c4dfe5..c16ea87 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
@@ -135,6 +135,8 @@ public class TestFilterFileSystem {
 public Path fixRelativePart(Path p);
 public ContentSummary getContentSummary(Path f);
 public QuotaUsage getQuotaUsage(Path f);
+void setQuota(Path f, long namespaceQuota, long storagespaceQuota);
+void setQuotaByStorageType(Path f, StorageType type, long quota);
 StorageStatistics getStorageStatistics();
   }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache
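
For a sense of how the backported API surfaces to clients, a small sketch against a WebHDFS endpoint; the host, port, path and quota values are placeholders.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

public class WebHdfsSetQuotaSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs =
        FileSystem.get(URI.create("webhdfs://namenode-host:9870"), conf)) {
      WebHdfsFileSystem webHdfs = (WebHdfsFileSystem) fs;
      Path dir = new Path("/user/project");
      // Namespace quota (number of files/directories) and storage space quota in bytes.
      webHdfs.setQuota(dir, 100000L, 10L * 1024 * 1024 * 1024);
      // Per-storage-type quota, e.g. cap SSD usage at 1 GiB.
      webHdfs.setQuotaByStorageType(dir, StorageType.SSD, 1024L * 1024 * 1024);
    }
  }
}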

[hadoop] branch branch-3.1 updated: HDFS-15515: mkdirs on fallback should throw IOE out instead of suppressing and returning false (#2205)

2020-08-25 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 0808cb6  HDFS-15515: mkdirs on fallback should throw IOE out instead 
of suppressing and returning false (#2205)
0808cb6 is described below

commit 0808cb6355d54331f0395948eeb33ac161dd768c
Author: Uma Maheswara Rao G 
AuthorDate: Tue Aug 11 00:01:58 2020 -0700

HDFS-15515: mkdirs on fallback should throw IOE out instead of suppressing 
and returning false (#2205)

* HDFS-15515: mkdirs on fallback should throw IOE out instead of 
suppressing and returning false

* Used LambdaTestUtils#intercept in test

(cherry picked from commit 99b120a06e27add0b9070c829cd828d41a150e8c)
---
 .../src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java  | 7 +++
 .../apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java| 5 -
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index b441483..f545165 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1382,7 +1382,7 @@ public class ViewFileSystem extends FileSystem {
 
 @Override
 public boolean mkdirs(Path dir, FsPermission permission)
-throws AccessControlException, FileAlreadyExistsException {
+throws IOException {
   if (theInternalDir.isRoot() && dir == null) {
 throw new FileAlreadyExistsException("/ already exits");
   }
@@ -1412,7 +1412,7 @@ public class ViewFileSystem extends FileSystem {
 .append(linkedFallbackFs.getUri());
 LOG.debug(msg.toString(), e);
   }
-  return false;
+  throw e;
 }
   }
 
@@ -1420,8 +1420,7 @@ public class ViewFileSystem extends FileSystem {
 }
 
 @Override
-public boolean mkdirs(Path dir)
-throws AccessControlException, FileAlreadyExistsException {
+public boolean mkdirs(Path dir) throws IOException {
   return mkdirs(dir, null);
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
index bd2b5af..e731760 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.fs.viewfs;
 
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotNull;
@@ -759,7 +760,9 @@ public class TestViewFileSystemLinkFallback extends 
ViewFileSystemBaseTest {
   cluster.shutdownNameNodes(); // Stopping fallback server
   // /user1/test1 does not exist in mount internal dir tree, it would
   // attempt to create in fallback.
-  assertFalse(vfs.mkdirs(nextLevelToInternalDir));
+  intercept(IOException.class, () -> {
+vfs.mkdirs(nextLevelToInternalDir);
+  });
   cluster.restartNameNodes();
   // should return true succeed when fallback fs is back to normal.
   assertTrue(vfs.mkdirs(nextLevelToInternalDir));
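
From a caller's point of view the change looks roughly like this (the setup and path are illustrative; a viewfs mount table with a linkFallback is assumed to be configured elsewhere): a fallback failure now surfaces as an IOException rather than a bare false.

import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MkdirsFallbackSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // mount links + linkFallback assumed configured
    try (FileSystem viewFs = FileSystem.get(URI.create("viewfs://clusterX/"), conf)) {
      Path dir = new Path("/user1/test1");    // only creatable on the fallback, hypothetical
      try {
        System.out.println("created: " + viewFs.mkdirs(dir));
      } catch (IOException e) {
        // Previously this case was reported as "return false"; the underlying
        // cause (e.g. fallback NameNode down) is now thrown to the caller.
        System.err.println("fallback mkdirs failed: " + e);
      }
    }
  }
}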


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.2 updated: HDFS-15515: mkdirs on fallback should throw IOE out instead of suppressing and returning false (#2205)

2020-08-25 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 978ce5a  HDFS-15515: mkdirs on fallback should throw IOE out instead 
of suppressing and returning false (#2205)
978ce5a is described below

commit 978ce5a1ee40d6edfebc253150dfabd7a9846c2f
Author: Uma Maheswara Rao G 
AuthorDate: Tue Aug 11 00:01:58 2020 -0700

HDFS-15515: mkdirs on fallback should throw IOE out instead of suppressing 
and returning false (#2205)

* HDFS-15515: mkdirs on fallback should throw IOE out instead of 
suppressing and returning false

* Used LambdaTestUtils#intercept in test

(cherry picked from commit 99b120a06e27add0b9070c829cd828d41a150e8c)
---
 .../src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java  | 7 +++
 .../apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java| 5 -
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index dc700d7..2ffa8bd 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1414,7 +1414,7 @@ public class ViewFileSystem extends FileSystem {
 
 @Override
 public boolean mkdirs(Path dir, FsPermission permission)
-throws AccessControlException, FileAlreadyExistsException {
+throws IOException {
   if (theInternalDir.isRoot() && dir == null) {
 throw new FileAlreadyExistsException("/ already exits");
   }
@@ -1444,7 +1444,7 @@ public class ViewFileSystem extends FileSystem {
 .append(linkedFallbackFs.getUri());
 LOG.debug(msg.toString(), e);
   }
-  return false;
+  throw e;
 }
   }
 
@@ -1452,8 +1452,7 @@ public class ViewFileSystem extends FileSystem {
 }
 
 @Override
-public boolean mkdirs(Path dir)
-throws AccessControlException, FileAlreadyExistsException {
+public boolean mkdirs(Path dir) throws IOException {
   return mkdirs(dir, null);
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
index bd2b5af..e731760 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.fs.viewfs;
 
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotNull;
@@ -759,7 +760,9 @@ public class TestViewFileSystemLinkFallback extends 
ViewFileSystemBaseTest {
   cluster.shutdownNameNodes(); // Stopping fallback server
   // /user1/test1 does not exist in mount internal dir tree, it would
   // attempt to create in fallback.
-  assertFalse(vfs.mkdirs(nextLevelToInternalDir));
+  intercept(IOException.class, () -> {
+vfs.mkdirs(nextLevelToInternalDir);
+  });
   cluster.restartNameNodes();
   // should return true succeed when fallback fs is back to normal.
   assertTrue(vfs.mkdirs(nextLevelToInternalDir));


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: HDFS-15533: Provide DFS API compatible class, but use ViewFileSystemOverloadScheme inside. (#2229). Contributed by Uma Maheswara Rao G.

2020-08-25 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new ba0eca6  HDFS-15533: Provide DFS API compatible class, but use 
ViewFileSystemOverloadScheme inside. (#2229). Contributed by Uma Maheswara Rao 
G.
ba0eca6 is described below

commit ba0eca6a2c7d5e5aed607f5f31f0e8b1911cf1f8
Author: Uma Maheswara Rao G 
AuthorDate: Wed Aug 19 09:30:41 2020 -0700

HDFS-15533: Provide DFS API compatible class, but use 
ViewFileSystemOverloadScheme inside. (#2229). Contributed by Uma Maheswara Rao 
G.

(cherry picked from commit dd013f2fdf1ecbeb6c877e26951cd0d8922058b0)
---
 .../java/org/apache/hadoop/fs/FsConstants.java |1 -
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |7 +-
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|   11 +-
 .../fs/viewfs/ViewFileSystemOverloadScheme.java|   75 +-
 .../apache/hadoop/hdfs/DistributedFileSystem.java  |   14 +-
 .../hadoop/hdfs/ViewDistributedFileSystem.java | 2307 
 .../hadoop/hdfs/server/namenode/NameNode.java  |6 -
 ...FSOverloadSchemeWithMountTableConfigInHDFS.java |4 +-
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |  142 +-
 .../hadoop/hdfs/TestDistributedFileSystem.java |   28 +-
 .../hadoop/hdfs/TestViewDistributedFileSystem.java |   47 +
 .../TestViewDistributedFileSystemContract.java |   94 +
 ...estViewDistributedFileSystemWithMountLinks.java |   64 +
 .../hdfs/server/namenode/TestCacheDirectives.java  |   27 +-
 .../namenode/TestCacheDirectivesWithViewDFS.java   |   56 +
 15 files changed, 2794 insertions(+), 89 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
index 344048f..6034542 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
@@ -45,5 +45,4 @@ public interface FsConstants {
   String FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN =
   "fs.viewfs.overload.scheme.target.%s.impl";
   String VIEWFS_TYPE = "viewfs";
-  String VIEWFSOS_TYPE = "viewfsOverloadScheme";
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 422e733..003694f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -599,9 +599,10 @@ abstract class InodeTree {
 
 if (!gotMountTableEntry) {
   if (!initingUriAsFallbackOnNoMounts) {
-throw new IOException(
-"ViewFs: Cannot initialize: Empty Mount table in config for "
-+ "viewfs://" + mountTableName + "/");
+throw new IOException(new StringBuilder(
+"ViewFs: Cannot initialize: Empty Mount table in config for ")
+.append(theUri.getScheme()).append("://").append(mountTableName)
+.append("/").toString());
   }
   StringBuilder msg =
   new StringBuilder("Empty mount table detected for ").append(theUri)
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index ad62f94..8c659d1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -259,13 +259,14 @@ public class ViewFileSystem extends FileSystem {
   }
 
   /**
-   * Returns the ViewFileSystem type.
-   * @return viewfs
+   * Returns false as it does not support to add fallback link automatically on
+   * no mounts.
*/
-  String getType() {
-return FsConstants.VIEWFS_TYPE;
+  boolean supportAutoAddingFallbackOnNoMounts() {
+return false;
   }
 
+
   /**
* Called after a new FileSystem instance is constructed.
* @param theUri a uri whose authority section names the host, port, etc. for
@@ -293,7 +294,7 @@ public class ViewFileSystem extends FileSystem {
 try {
   myUri = new URI(getScheme(), authority, "/", null, null);
   boolean initingUriAsFallbackOnNoMounts =
-  !FsConstants.VIEWFS_TYPE.equals(getType());
+  supportAutoAddingFallbackOnNoMounts();
   fsState = new InodeTree(conf, tableName, myUri,
   initin
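
A rough idea of how a deployment might opt in to the new class while keeping hdfs:// application paths unchanged; the nameservice ns1, the /data link and its target are invented for the example.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.ConfigUtil;

public class ViewDfsOptInSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Swap the hdfs scheme implementation to the DFS-compatible view class.
    conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.ViewDistributedFileSystem");
    conf.set("fs.defaultFS", "hdfs://ns1");
    // Mount links are keyed by the defaultFS authority.
    ConfigUtil.addLink(conf, "ns1", "/data", URI.create("hdfs://ns2:8020/data"));
    try (FileSystem fs = FileSystem.get(conf)) {
      // Code written against DistributedFileSystem keeps working, while
      // /data now resolves through the mount table to the second cluster.
      fs.listStatus(new Path("/data"));
    }
  }
}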

[hadoop] branch trunk updated (82ec28f -> dd013f2)

2020-08-19 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 82ec28f  YARN-10396. Max applications calculation per queue disregards 
queue level settings in absolute mode. Contributed by Benjamin Teke.
 add dd013f2  HDFS-15533: Provide DFS API compatible class, but use 
ViewFileSystemOverloadScheme inside. (#2229). Contributed by Uma Maheswara Rao 
G.

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/fs/FsConstants.java |1 -
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |7 +-
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|   11 +-
 .../fs/viewfs/ViewFileSystemOverloadScheme.java|   75 +-
 .../apache/hadoop/hdfs/DistributedFileSystem.java  |   14 +-
 .../hadoop/hdfs/ViewDistributedFileSystem.java | 2307 
 .../hadoop/hdfs/server/namenode/NameNode.java  |6 -
 ...FSOverloadSchemeWithMountTableConfigInHDFS.java |4 +-
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |  142 +-
 .../hadoop/hdfs/TestDistributedFileSystem.java |   30 +-
 .../hadoop/hdfs/TestViewDistributedFileSystem.java |   47 +
 .../TestViewDistributedFileSystemContract.java}|   73 +-
 ...estViewDistributedFileSystemWithMountLinks.java |   64 +
 .../hdfs/server/namenode/TestCacheDirectives.java  |   27 +-
 .../namenode/TestCacheDirectivesWithViewDFS.java   |   56 +
 15 files changed, 2717 insertions(+), 147 deletions(-)
 create mode 100644 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java
 create mode 100644 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java
 copy 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/{fs/viewfs/TestViewFileSystemOverloadSchemeHdfsFileSystemContract.java
 => hdfs/TestViewDistributedFileSystemContract.java} (54%)
 create mode 100644 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystemWithMountLinks.java
 create mode 100644 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectivesWithViewDFS.java


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated (592127b -> 8955a6c)

2020-08-11 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 592127b  HDFS-15520 Use visitor pattern to visit namespace tree (#2203)
 add 8955a6c  HDFS-15515: mkdirs on fallback should throw IOE out instead 
of suppressing and returning false (#2205)

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java  | 7 +++
 .../apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java| 5 -
 2 files changed, 7 insertions(+), 5 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.1 updated: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs. (#2160). Contributed by Uma Maheswara Rao

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 1768618  HDFS-15478: When Empty mount points, we are assigning 
fallback link to self. But it should not use full URI for target fs. (#2160). 
Contributed by Uma Maheswara Rao G.
1768618 is described below

commit 1768618ab948fbd0cfdfa481a2ece124e10e33ec
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jul 21 23:29:10 2020 -0700

HDFS-15478: When Empty mount points, we are assigning fallback link to 
self. But it should not use full URI for target fs. (#2160). Contributed by Uma 
Maheswara Rao G.

(cherry picked from commit ac9a07b51aefd0fd3b4602adc844ab0f172835e3)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 27 +++---
 .../src/site/markdown/ViewFsOverloadScheme.md  |  2 ++
 3 files changed, 22 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 484581c..b441483 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -292,7 +292,7 @@ public class ViewFileSystem extends FileSystem {
   myUri = new URI(getScheme(), authority, "/", null, null);
   boolean initingUriAsFallbackOnNoMounts =
   !FsConstants.VIEWFS_TYPE.equals(getType());
-  fsState = new InodeTree(conf, tableName, theUri,
+  fsState = new InodeTree(conf, tableName, myUri,
   initingUriAsFallbackOnNoMounts) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
index 300fdd8..7afc789 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
@@ -127,19 +127,30 @@ public class TestViewFsOverloadSchemeListStatus {
 
   /**
* Tests that ViewFSOverloadScheme should consider initialized fs as fallback
-   * if there are no mount links configured.
+   * if there are no mount links configured. It should add fallback with the
+   * chrootedFS at it's uri's root.
*/
   @Test(timeout = 3)
   public void testViewFSOverloadSchemeWithoutAnyMountLinks() throws Exception {
-try (FileSystem fs = FileSystem.get(TEST_DIR.toPath().toUri(), conf)) {
+Path initUri = new Path(TEST_DIR.toURI().toString(), "init");
+try (FileSystem fs = FileSystem.get(initUri.toUri(), conf)) {
   ViewFileSystemOverloadScheme vfs = (ViewFileSystemOverloadScheme) fs;
   assertEquals(0, vfs.getMountPoints().length);
-  Path testFallBack = new Path("test", FILE_NAME);
-  assertTrue(vfs.mkdirs(testFallBack));
-  FileStatus[] status = vfs.listStatus(testFallBack.getParent());
-  assertEquals(FILE_NAME, status[0].getPath().getName());
-  assertEquals(testFallBack.getName(),
-  vfs.getFileLinkStatus(testFallBack).getPath().getName());
+  Path testOnFallbackPath = new Path(TEST_DIR.toURI().toString(), "test");
+  assertTrue(vfs.mkdirs(testOnFallbackPath));
+  FileStatus[] status = vfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(Path.getPathWithoutSchemeAndAuthority(testOnFallbackPath),
+  Path.getPathWithoutSchemeAndAuthority(status[0].getPath()));
+  //Check directly on localFS. The fallBackFs(localFS) should be chrooted
+  //at it's root. So, after
+  FileSystem lfs = vfs.getRawFileSystem(testOnFallbackPath, conf);
+  FileStatus[] statusOnLocalFS =
+  lfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(testOnFallbackPath.getName(),
+  statusOnLocalFS[0].getPath().getName());
+  //initUri should not have exist in lfs, as it would have chrooted on it's
+  // root only.
+  assertFalse(lfs.exists(initUri));
 }
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
index 564bc03..f3eb336 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
@@ -34,6 

[hadoop] branch branch-3.1 updated: HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 544602e  HDFS-15464: ViewFsOverloadScheme should work when -fs option 
pointing to remote cluster without mount links (#2132). Contributed by Uma 
Maheswara Rao G.
544602e is described below

commit 544602e3d16a9a6e47c8851444f682d1fd4491d9
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 11 23:50:04 2020 -0700

HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to 
remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

(cherry picked from commit 3e700066394fb9f516e23537d8abb4661409cae1)
---
 .../java/org/apache/hadoop/fs/FsConstants.java |  2 ++
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 22 +---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 13 +++-
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 12 +++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 16 +++--
 .../apache/hadoop/fs/viewfs/TestViewFsConfig.java  |  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 39 --
 .../src/site/markdown/ViewFsOverloadScheme.md  |  3 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 20 +++
 9 files changed, 102 insertions(+), 27 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
index 07c16b2..344048f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
@@ -44,4 +44,6 @@ public interface FsConstants {
   public static final String VIEWFS_SCHEME = "viewfs";
   String FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN =
   "fs.viewfs.overload.scheme.target.%s.impl";
+  String VIEWFS_TYPE = "viewfs";
+  String VIEWFSOS_TYPE = "viewfsOverloadScheme";
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 3d709b1..422e733 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -67,7 +68,7 @@ abstract class InodeTree {
   // the root of the mount table
   private final INode root;
   // the fallback filesystem
-  private final INodeLink rootFallbackLink;
+  private INodeLink rootFallbackLink;
   // the homedir for this mount table
   private final String homedirPrefix;
   private List> mountPoints = new ArrayList>();
@@ -460,7 +461,8 @@ abstract class InodeTree {
* @throws FileAlreadyExistsException
* @throws IOException
*/
-  protected InodeTree(final Configuration config, final String viewName)
+  protected InodeTree(final Configuration config, final String viewName,
+  final URI theUri, boolean initingUriAsFallbackOnNoMounts)
   throws UnsupportedFileSystemException, URISyntaxException,
   FileAlreadyExistsException, IOException {
 String mountTableName = viewName;
@@ -596,9 +598,19 @@ abstract class InodeTree {
 }
 
 if (!gotMountTableEntry) {
-  throw new IOException(
-  "ViewFs: Cannot initialize: Empty Mount table in config for " +
-  "viewfs://" + mountTableName + "/");
+  if (!initingUriAsFallbackOnNoMounts) {
+throw new IOException(
+"ViewFs: Cannot initialize: Empty Mount table in config for "
++ "viewfs://" + mountTableName + "/");
+  }
+  StringBuilder msg =
+  new StringBuilder("Empty mount table detected for ").append(theUri)
+  .append(" and considering itself as a linkFallback.");
+  FileSystem.LOG.info(msg.toString());
+  rootFallbackLink =
+  new INodeLink(mountTableName, ugi, getTargetFileSystem(theUri),
+  theUri);
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
   }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileS
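
In effect, a client can now initialize the overload scheme against a remote cluster URI even when no mount links are configured; a minimal sketch, with the remote address made up:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OverloadSchemeNoMountsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.hdfs.impl",
        "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");
    conf.set("fs.viewfs.overload.scheme.target.hdfs.impl",
        "org.apache.hadoop.hdfs.DistributedFileSystem");
    // No fs.viewfs.mounttable.* links: with HDFS-15464 the initialized URI is
    // treated as the fallback, so the remote cluster stays fully usable.
    try (FileSystem fs =
        FileSystem.get(URI.create("hdfs://remote-cluster:8020/"), conf)) {
      fs.listStatus(new Path("/"));
    }
  }
}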

[hadoop] branch branch-3.1 updated: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 7084b27  HDFS-15449. Optionally ignore port number in mount-table name 
when picking from initialized uri. Contributed by Uma Maheswara Rao G.
7084b27 is described below

commit 7084b273aca575292ac6834ff2a5f4d7c1b41ba9
Author: Uma Maheswara Rao G 
AuthorDate: Mon Jul 6 18:50:03 2020 -0700

HDFS-15449. Optionally ignore port number in mount-table name when picking 
from initialized uri. Contributed by Uma Maheswara Rao G.

(cherry picked from commit dc0626b5f2f2ba0bd3919650ea231cedd424f77a)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 10 -
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 13 ++-
 .../src/site/markdown/ViewFsOverloadScheme.md  |  8 +++-
 ...SystemOverloadSchemeHdfsFileSystemContract.java |  4 ++
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 45 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 17 
 ...ViewFileSystemOverloadSchemeWithFSCommands.java |  2 +-
 8 files changed, 97 insertions(+), 15 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 28ebf73..492cb87 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -104,4 +104,17 @@ public interface Constants {
   "fs.viewfs.mount.links.as.symlinks";
 
   boolean CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT = true;
+
+  /**
+   * When initializing the viewfs, authority will be used as the mount table
+   * name to find the mount link configurations. To make the mount table name
+   * unique, we may want to ignore port if initialized uri authority contains
+   * port number. By default, we will consider port number also in
+   * ViewFileSystem(This default value false, because to support existing
+   * deployments continue with the current behavior).
+   */
+  String CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME =
+  "fs.viewfs.ignore.port.in.mount.table.name";
+
+  boolean CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT = false;
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index e192bfc..1ca1759 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.fs.viewfs;
 
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.PERMISSION_555;
@@ -272,9 +274,15 @@ public class ViewFileSystem extends FileSystem {
 final InnerCache innerCache = new InnerCache(fsGetter);
 // Now build  client side view (i.e. client side mount table) from config.
 final String authority = theUri.getAuthority();
+String tableName = authority;
+if (theUri.getPort() != -1 && config
+.getBoolean(CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME,
+CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT)) {
+  tableName = theUri.getHost();
+}
 try {
   myUri = new URI(getScheme(), authority, "/", null, null);
-  fsState = new InodeTree(conf, authority) {
+  fsState = new InodeTree(conf, tableName) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
   throws URISyntaxException, IOException {
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index 672022b..2f3359d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOv
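
The new knob is opt-in and defaults to false to preserve existing deployments; a small sketch of how it changes mount table resolution (the mount table name mycluster and the link target are placeholders):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.ConfigUtil;

public class IgnorePortInMountTableNameSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.hdfs.impl",
        "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");
    conf.set("fs.viewfs.overload.scheme.target.hdfs.impl",
        "org.apache.hadoop.hdfs.DistributedFileSystem");
    // Strip the port when deriving the mount table name from the URI authority.
    conf.setBoolean("fs.viewfs.ignore.port.in.mount.table.name", true);
    // Links are then looked up under the host-only name "mycluster".
    ConfigUtil.addLink(conf, "mycluster", "/user", URI.create("hdfs://nn1:8020/user"));
    // hdfs://mycluster:8020/user and hdfs://mycluster/user hit the same link.
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster:8020/"), conf)) {
      fs.listStatus(new Path("/user"));
    }
  }
}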

[hadoop] branch branch-3.1 updated: HDFS-15430. create should work when parent dir is internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 49a7f9f  HDFS-15430. create should work when parent dir is internalDir 
and fallback configured. Contributed by Uma Maheswara Rao G.
49a7f9f is described below

commit 49a7f9ff7b2fc73957512ffc7038c5103cf38137
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 00:12:10 2020 -0700

HDFS-15430. create should work when parent dir is internalDir and fallback 
configured. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 1f2a80b5e5024aeb7fb1f8c31b8fdd0fdb88bb66)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  37 -
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  37 +
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 148 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |  28 
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 154 +
 5 files changed, 375 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index e87c145..e192bfc 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -40,6 +40,7 @@ import java.util.Map.Entry;
 import java.util.Objects;
 import java.util.Set;
 
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -1141,7 +1142,41 @@ public class ViewFileSystem extends FileSystem {
 public FSDataOutputStream create(final Path f,
 final FsPermission permission, final boolean overwrite,
 final int bufferSize, final short replication, final long blockSize,
-final Progressable progress) throws AccessControlException {
+final Progressable progress) throws IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsException(
+"/ is not a file. The directory / already exist at: "
++ theInternalDir.fullPath);
+  }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+
+if (theInternalDir.getChildren().containsKey(f.getName())) {
+  throw new FileAlreadyExistsException(
+  "A mount path(file/dir) already exist with the requested path: "
+  + theInternalDir.getChildren().get(f.getName()).fullPath);
+}
+
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leaf = f.getName();
+Path fileToCreate = new Path(parent, leaf);
+
+try {
+  return linkedFallbackFs
+  .create(fileToCreate, permission, overwrite, bufferSize,
+  replication, blockSize, progress);
+} catch (IOException e) {
+  StringBuilder msg =
+  new StringBuilder("Failed to create file:").append(fileToCreate)
+  .append(" at fallback : ").append(linkedFallbackFs.getUri());
+  LOG.error(msg.toString(), e);
+  throw e;
+}
+  }
   throw readOnlyMountTable("create", f);
 }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 770f43b..598a66d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -33,6 +33,8 @@ import java.util.Map;
 import java.util.Map.Entry;
 
 import java.util.Set;
+
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -912,6 +914,41 @@ public class ViewFs extends AbstractFileSystem {
 FileAlreadyExistsException, FileNotFoundException,
 ParentNotDirectoryException, UnsupportedFileSystemException,
 UnresolvedLinkException, IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsE
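
A small usage sketch of the behaviour being added (the mount table name, fallback target and file path are placeholders): creating a file whose parent is an internal mount directory is now delegated to the configured fallback filesystem instead of failing as a read-only mount table.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.ConfigUtil;

public class CreateOnFallbackSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    ConfigUtil.addLink(conf, "clusterX", "/data/warehouse",
        URI.create("hdfs://nn1:8020/warehouse"));                 // hypothetical mount link
    ConfigUtil.addLinkFallback(conf, "clusterX", URI.create("hdfs://nn2:8020/"));
    try (FileSystem viewFs = FileSystem.get(URI.create("viewfs://clusterX/"), conf)) {
      // "/data" exists only as an internal dir of the mount tree; the file is
      // created on the fallback filesystem at the corresponding path.
      try (FSDataOutputStream out = viewFs.create(new Path("/data/newFile"))) {
        out.writeUTF("hello");
      }
    }
  }
}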

[hadoop] branch branch-3.1 updated: HDFS-15450. Fix NN trash emptier to work if ViewFSOverloadScheme enabled. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new ab43b7b  HDFS-15450. Fix NN trash emptier to work if 
ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.
ab43b7b is described below

commit ab43b7bcfb294d4da1089c3acb01044deb845895
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 13:45:49 2020 -0700

HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 55a2ae80dc9b45413febd33840b8a653e3e29440)
---
 .../hadoop/hdfs/server/namenode/NameNode.java  |  7 ++
 ...stNNStartupWhenViewFSOverloadSchemeEnabled.java | 88 ++
 2 files changed, 95 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 4741c6c..c8cd8f7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.ha.ServiceFailedException;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.DFSUtilClient;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HAUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
@@ -365,6 +366,7 @@ public class NameNode extends ReconfigurableBase implements
*/
   @Deprecated
   public static final int DEFAULT_PORT = DFS_NAMENODE_RPC_PORT_DEFAULT;
+  public static final String FS_HDFS_IMPL_KEY = "fs.hdfs.impl";
   public static final Logger LOG =
   LoggerFactory.getLogger(NameNode.class.getName());
   public static final Logger stateChangeLog =
@@ -704,6 +706,11 @@ public class NameNode extends ReconfigurableBase implements
   intervals);
   }
 }
+// Currently NN uses FileSystem.get to initialize DFS in startTrashEmptier.
+// If fs.hdfs.impl was overridden by core-site.xml, we may get other
+// filesystem. To make sure we get DFS, we are setting fs.hdfs.impl to DFS.
+// HDFS-15450
+conf.set(FS_HDFS_IMPL_KEY, DistributedFileSystem.class.getName());
 
 UserGroupInformation.setConfiguration(conf);
 loginAsNameNodeUser(conf);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
new file mode 100644
index 000..9d394c0
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests that the NN startup is successful with ViewFSOverloadScheme.
+ */
+public class TestNNStartupWhenViewFSOverloadSchemeEnabled {
+  private MiniDFSCluster cluster;
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private static final Configuration CONF = new Configuration();
+
+  @BeforeClass
+  public static void setUp() {
+CONF.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
+CONF.setInt(DFSConfigKeys

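A small sketch of the situation the one-line NameNode change above guards against; the namespace URI hdfs://ns1/ is illustrative and a reachable NameNode is assumed.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class TrashEmptierFsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // A ViewFSOverloadScheme deployment typically overrides fs.hdfs.impl in
    // core-site.xml, so FileSystem.get() would hand back the overload-scheme FS:
    conf.set("fs.hdfs.impl", ViewFileSystemOverloadScheme.class.getName());

    // The NameNode's trash emptier needs a real DistributedFileSystem, so the
    // patch re-pins the key before calling FileSystem.get (mirrored here):
    conf.set("fs.hdfs.impl", DistributedFileSystem.class.getName());

    FileSystem fs = FileSystem.get(URI.create("hdfs://ns1/"), conf);
    System.out.println(fs instanceof DistributedFileSystem); // true after re-pinning
  }
}
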
[hadoop] branch branch-3.2 updated: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs. (#2160). Contributed by Uma Maheswara Rao

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new cd5efe9  HDFS-15478: When Empty mount points, we are assigning 
fallback link to self. But it should not use full URI for target fs. (#2160). 
Contributed by Uma Maheswara Rao G.
cd5efe9 is described below

commit cd5efe91d9dda4a67050f81aa18fa871e3e4ed8b
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jul 21 23:29:10 2020 -0700

HDFS-15478: When Empty mount points, we are assigning fallback link to 
self. But it should not use full URI for target fs. (#2160). Contributed by Uma 
Maheswara Rao G.

(cherry picked from commit ac9a07b51aefd0fd3b4602adc844ab0f172835e3)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 27 +++---
 .../src/site/markdown/ViewFsOverloadScheme.md  |  2 ++
 3 files changed, 22 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 484581c..b441483 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -292,7 +292,7 @@ public class ViewFileSystem extends FileSystem {
   myUri = new URI(getScheme(), authority, "/", null, null);
   boolean initingUriAsFallbackOnNoMounts =
   !FsConstants.VIEWFS_TYPE.equals(getType());
-  fsState = new InodeTree(conf, tableName, theUri,
+  fsState = new InodeTree(conf, tableName, myUri,
   initingUriAsFallbackOnNoMounts) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
index 300fdd8..7afc789 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
@@ -127,19 +127,30 @@ public class TestViewFsOverloadSchemeListStatus {
 
   /**
* Tests that ViewFSOverloadScheme should consider initialized fs as fallback
-   * if there are no mount links configured.
+   * if there are no mount links configured. It should add fallback with the
+   * chrootedFS at it's uri's root.
*/
   @Test(timeout = 3)
   public void testViewFSOverloadSchemeWithoutAnyMountLinks() throws Exception {
-try (FileSystem fs = FileSystem.get(TEST_DIR.toPath().toUri(), conf)) {
+Path initUri = new Path(TEST_DIR.toURI().toString(), "init");
+try (FileSystem fs = FileSystem.get(initUri.toUri(), conf)) {
   ViewFileSystemOverloadScheme vfs = (ViewFileSystemOverloadScheme) fs;
   assertEquals(0, vfs.getMountPoints().length);
-  Path testFallBack = new Path("test", FILE_NAME);
-  assertTrue(vfs.mkdirs(testFallBack));
-  FileStatus[] status = vfs.listStatus(testFallBack.getParent());
-  assertEquals(FILE_NAME, status[0].getPath().getName());
-  assertEquals(testFallBack.getName(),
-  vfs.getFileLinkStatus(testFallBack).getPath().getName());
+  Path testOnFallbackPath = new Path(TEST_DIR.toURI().toString(), "test");
+  assertTrue(vfs.mkdirs(testOnFallbackPath));
+  FileStatus[] status = vfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(Path.getPathWithoutSchemeAndAuthority(testOnFallbackPath),
+  Path.getPathWithoutSchemeAndAuthority(status[0].getPath()));
+  //Check directly on localFS. The fallBackFs(localFS) should be chrooted
+  //at it's root. So, after
+  FileSystem lfs = vfs.getRawFileSystem(testOnFallbackPath, conf);
+  FileStatus[] statusOnLocalFS =
+  lfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(testOnFallbackPath.getName(),
+  statusOnLocalFS[0].getPath().getName());
+  //initUri should not have exist in lfs, as it would have chrooted on it's
+  // root only.
+  assertFalse(lfs.exists(initUri));
 }
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
index 564bc03..f3eb336 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
@@ -34,6 

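The heart of the fix is which URI the self-fallback link targets. A tiny, self-contained illustration with an invented init path:

import java.net.URI;

public class FallbackUriSketch {
  public static void main(String[] args) throws Exception {
    // URI the client happened to initialize the filesystem with (invented).
    URI initUri = URI.create("hdfs://ns1/user/alice/init");

    // What ViewFileSystem builds as myUri: same scheme and authority, rooted at "/".
    URI rootUri = new URI(initUri.getScheme(), initUri.getAuthority(), "/", null, null);

    // Before the fix the empty-mount-table fallback targeted initUri, effectively
    // chrooting the fallback FS under /user/alice/init; after the fix it targets
    // rootUri, so the whole target namespace is visible at "/".
    System.out.println(rootUri); // hdfs://ns1/
  }
}
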
[hadoop] branch branch-3.2 updated: HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 65778cd  HDFS-15464: ViewFsOverloadScheme should work when -fs option 
pointing to remote cluster without mount links (#2132). Contributed by Uma 
Maheswara Rao G.
65778cd is described below

commit 65778cdd474997b4cdeba7a3389bc4427f0e56d8
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 11 23:50:04 2020 -0700

HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to 
remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

(cherry picked from commit 3e700066394fb9f516e23537d8abb4661409cae1)
---
 .../java/org/apache/hadoop/fs/FsConstants.java |  2 ++
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 22 +---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 13 +++-
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 12 +++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 16 +++--
 .../apache/hadoop/fs/viewfs/TestViewFsConfig.java  |  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 39 --
 .../src/site/markdown/ViewFsOverloadScheme.md  |  3 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 20 +++
 9 files changed, 102 insertions(+), 27 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
index 07c16b2..344048f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
@@ -44,4 +44,6 @@ public interface FsConstants {
   public static final String VIEWFS_SCHEME = "viewfs";
   String FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN =
   "fs.viewfs.overload.scheme.target.%s.impl";
+  String VIEWFS_TYPE = "viewfs";
+  String VIEWFSOS_TYPE = "viewfsOverloadScheme";
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 3d709b1..422e733 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -67,7 +68,7 @@ abstract class InodeTree<T> {
   // the root of the mount table
   private final INode<T> root;
   // the fallback filesystem
-  private final INodeLink<T> rootFallbackLink;
+  private INodeLink<T> rootFallbackLink;
   // the homedir for this mount table
   private final String homedirPrefix;
   private List<MountPoint<T>> mountPoints = new ArrayList<MountPoint<T>>();
@@ -460,7 +461,8 @@ abstract class InodeTree {
* @throws FileAlreadyExistsException
* @throws IOException
*/
-  protected InodeTree(final Configuration config, final String viewName)
+  protected InodeTree(final Configuration config, final String viewName,
+  final URI theUri, boolean initingUriAsFallbackOnNoMounts)
   throws UnsupportedFileSystemException, URISyntaxException,
   FileAlreadyExistsException, IOException {
 String mountTableName = viewName;
@@ -596,9 +598,19 @@ abstract class InodeTree {
 }
 
 if (!gotMountTableEntry) {
-  throw new IOException(
-  "ViewFs: Cannot initialize: Empty Mount table in config for " +
-  "viewfs://" + mountTableName + "/");
+  if (!initingUriAsFallbackOnNoMounts) {
+throw new IOException(
+"ViewFs: Cannot initialize: Empty Mount table in config for "
++ "viewfs://" + mountTableName + "/");
+  }
+  StringBuilder msg =
+  new StringBuilder("Empty mount table detected for ").append(theUri)
+  .append(" and considering itself as a linkFallback.");
+  FileSystem.LOG.info(msg.toString());
+  rootFallbackLink =
+  new INodeLink(mountTableName, ugi, getTargetFileSystem(theUri),
+  theUri);
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
   }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileS

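A hedged sketch of the client setup this change enables; the remote URI hdfs://remote-ns/ is invented, a reachable cluster is assumed, and the target-impl key is the FsConstants pattern added in the diff above.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsConstants;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class NoMountLinksSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Route the hdfs scheme through the overload-scheme filesystem...
    conf.set("fs.hdfs.impl", ViewFileSystemOverloadScheme.class.getName());
    // ...and declare the concrete FS to use underneath for hdfs targets.
    conf.set(String.format(
        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN, "hdfs"),
        DistributedFileSystem.class.getName());

    // No fs.viewfs.mounttable.* links are configured. With HDFS-15464 the
    // initialized URI itself becomes the single fallback link instead of the
    // initialization failing with "Empty Mount table".
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://remote-ns/"), conf)) {
      fs.listStatus(new Path("/"));
    }
  }
}
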
[hadoop] branch branch-3.2 updated: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 1369e41  HDFS-15449. Optionally ignore port number in mount-table name 
when picking from initialized uri. Contributed by Uma Maheswara Rao G.
1369e41 is described below

commit 1369e41c6525937fffe45a10272dd7547eac2e1f
Author: Uma Maheswara Rao G 
AuthorDate: Mon Jul 6 18:50:03 2020 -0700

HDFS-15449. Optionally ignore port number in mount-table name when picking 
from initialized uri. Contributed by Uma Maheswara Rao G.

(cherry picked from commit dc0626b5f2f2ba0bd3919650ea231cedd424f77a)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 10 -
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 13 ++-
 .../src/site/markdown/ViewFsOverloadScheme.md  |  8 +++-
 ...SystemOverloadSchemeHdfsFileSystemContract.java |  4 ++
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 45 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 17 
 ...ViewFileSystemOverloadSchemeWithFSCommands.java |  2 +-
 8 files changed, 97 insertions(+), 15 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 28ebf73..492cb87 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -104,4 +104,17 @@ public interface Constants {
   "fs.viewfs.mount.links.as.symlinks";
 
   boolean CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT = true;
+
+  /**
+   * When initializing the viewfs, authority will be used as the mount table
+   * name to find the mount link configurations. To make the mount table name
+   * unique, we may want to ignore port if initialized uri authority contains
+   * port number. By default, we will consider port number also in
+   * ViewFileSystem(This default value false, because to support existing
+   * deployments continue with the current behavior).
+   */
+  String CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME =
+  "fs.viewfs.ignore.port.in.mount.table.name";
+
+  boolean CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT = false;
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index e192bfc..1ca1759 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.fs.viewfs;
 
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.PERMISSION_555;
@@ -272,9 +274,15 @@ public class ViewFileSystem extends FileSystem {
 final InnerCache innerCache = new InnerCache(fsGetter);
 // Now build  client side view (i.e. client side mount table) from config.
 final String authority = theUri.getAuthority();
+String tableName = authority;
+if (theUri.getPort() != -1 && config
+.getBoolean(CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME,
+CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT)) {
+  tableName = theUri.getHost();
+}
 try {
   myUri = new URI(getScheme(), authority, "/", null, null);
-  fsState = new InodeTree(conf, authority) {
+  fsState = new InodeTree(conf, tableName) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
   throws URISyntaxException, IOException {
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index 672022b..2f3359d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOv

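A short example of the new knob; the host, port and link path are made up.

import org.apache.hadoop.conf.Configuration;

public class IgnorePortSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // With the default (false), viewfs://cluster1:8020/ resolves its links under
    // the table name "cluster1:8020" (e.g. fs.viewfs.mounttable.cluster1:8020.link./data).
    // Turning the flag on drops the port, so viewfs://cluster1/ and
    // viewfs://cluster1:8020/ share the single table name "cluster1".
    conf.setBoolean("fs.viewfs.ignore.port.in.mount.table.name", true);
  }
}
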
[hadoop] branch branch-3.2 updated: HDFS-15430. create should work when parent dir is internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 655b39c  HDFS-15430. create should work when parent dir is internalDir 
and fallback configured. Contributed by Uma Maheswara Rao G.
655b39c is described below

commit 655b39cc302acfca0b00e6ade92ebb20984a777e
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 00:12:10 2020 -0700

HDFS-15430. create should work when parent dir is internalDir and fallback 
configured. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 1f2a80b5e5024aeb7fb1f8c31b8fdd0fdb88bb66)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  37 -
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  37 +
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 148 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |  28 
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 154 +
 5 files changed, 375 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index e87c145..e192bfc 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -40,6 +40,7 @@ import java.util.Map.Entry;
 import java.util.Objects;
 import java.util.Set;
 
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -1141,7 +1142,41 @@ public class ViewFileSystem extends FileSystem {
 public FSDataOutputStream create(final Path f,
 final FsPermission permission, final boolean overwrite,
 final int bufferSize, final short replication, final long blockSize,
-final Progressable progress) throws AccessControlException {
+final Progressable progress) throws IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsException(
+"/ is not a file. The directory / already exist at: "
++ theInternalDir.fullPath);
+  }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+
+if (theInternalDir.getChildren().containsKey(f.getName())) {
+  throw new FileAlreadyExistsException(
+  "A mount path(file/dir) already exist with the requested path: "
+  + theInternalDir.getChildren().get(f.getName()).fullPath);
+}
+
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leaf = f.getName();
+Path fileToCreate = new Path(parent, leaf);
+
+try {
+  return linkedFallbackFs
+  .create(fileToCreate, permission, overwrite, bufferSize,
+  replication, blockSize, progress);
+} catch (IOException e) {
+  StringBuilder msg =
+  new StringBuilder("Failed to create file:").append(fileToCreate)
+  .append(" at fallback : ").append(linkedFallbackFs.getUri());
+  LOG.error(msg.toString(), e);
+  throw e;
+}
+  }
   throw readOnlyMountTable("create", f);
 }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 770f43b..598a66d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -33,6 +33,8 @@ import java.util.Map;
 import java.util.Map.Entry;
 
 import java.util.Set;
+
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -912,6 +914,41 @@ public class ViewFs extends AbstractFileSystem {
 FileAlreadyExistsException, FileNotFoundException,
 ParentNotDirectoryException, UnsupportedFileSystemException,
 UnresolvedLinkException, IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsE

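As a complement to the earlier setup sketch, a hypothetical illustration of the two outcomes of the new create() logic; mount names, paths, and the NameNode address are invented, and a reachable cluster is assumed.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.ConfigUtil;

public class CreateCollisionSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    ConfigUtil.addLink(conf, "clusterY", "/data",
        URI.create("hdfs://nn1:8020/data"));
    ConfigUtil.addLinkFallback(conf, "clusterY", URI.create("hdfs://nn1:8020/"));

    try (FileSystem viewFs =
        FileSystem.get(URI.create("viewfs://clusterY/"), conf)) {
      // A leaf that does not clash with any mount point goes to the fallback FS.
      viewFs.create(new Path("/report.csv")).close();

      // A leaf that matches an existing mount point ("/data") is rejected by the
      // FileAlreadyExistsException branch in the diff above.
      try {
        viewFs.create(new Path("/data"));
      } catch (FileAlreadyExistsException expected) {
        System.out.println("mount path collision: " + expected.getMessage());
      }
    }
  }
}
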
[hadoop] branch branch-3.2 updated: HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 512d1d6  HDFS-15450. Fix NN trash emptier to work if 
ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.
512d1d6 is described below

commit 512d1d6d272bb3f01b1e72f1de7908be87ac27de
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 13:45:49 2020 -0700

HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 55a2ae80dc9b45413febd33840b8a653e3e29440)
---
 .../hadoop/hdfs/server/namenode/NameNode.java  |  7 ++
 ...stNNStartupWhenViewFSOverloadSchemeEnabled.java | 88 ++
 2 files changed, 95 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 0fff970..30bf4f85 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.ha.ServiceFailedException;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.DFSUtilClient;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HAUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
@@ -369,6 +370,7 @@ public class NameNode extends ReconfigurableBase implements
*/
   @Deprecated
   public static final int DEFAULT_PORT = DFS_NAMENODE_RPC_PORT_DEFAULT;
+  public static final String FS_HDFS_IMPL_KEY = "fs.hdfs.impl";
   public static final Logger LOG =
   LoggerFactory.getLogger(NameNode.class.getName());
   public static final Logger stateChangeLog =
@@ -708,6 +710,11 @@ public class NameNode extends ReconfigurableBase implements
   intervals);
   }
 }
+// Currently NN uses FileSystem.get to initialize DFS in startTrashEmptier.
+// If fs.hdfs.impl was overridden by core-site.xml, we may get other
+// filesystem. To make sure we get DFS, we are setting fs.hdfs.impl to DFS.
+// HDFS-15450
+conf.set(FS_HDFS_IMPL_KEY, DistributedFileSystem.class.getName());
 
 UserGroupInformation.setConfiguration(conf);
 loginAsNameNodeUser(conf);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
new file mode 100644
index 000..9d394c0
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests that the NN startup is successful with ViewFSOverloadScheme.
+ */
+public class TestNNStartupWhenViewFSOverloadSchemeEnabled {
+  private MiniDFSCluster cluster;
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private static final Configuration CONF = new Configuration();
+
+  @BeforeClass
+  public static void setUp() {
+CONF.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
+CONF.setInt(DFSConfigKeys

[hadoop] branch branch-3.3 updated: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 10f8010  HDFS-15449. Optionally ignore port number in mount-table name 
when picking from initialized uri. Contributed by Uma Maheswara Rao G.
10f8010 is described below

commit 10f8010519d41119c282031ec00d86da7f3b0506
Author: Uma Maheswara Rao G 
AuthorDate: Mon Jul 6 18:50:03 2020 -0700

HDFS-15449. Optionally ignore port number in mount-table name when picking 
from initialized uri. Contributed by Uma Maheswara Rao G.

(cherry picked from commit dc0626b5f2f2ba0bd3919650ea231cedd424f77a)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 10 -
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 13 ++-
 .../src/site/markdown/ViewFsOverloadScheme.md  |  8 +++-
 ...SystemOverloadSchemeHdfsFileSystemContract.java |  4 ++
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 45 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 17 
 ...ViewFileSystemOverloadSchemeWithFSCommands.java |  2 +-
 8 files changed, 97 insertions(+), 15 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 28ebf73..492cb87 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -104,4 +104,17 @@ public interface Constants {
   "fs.viewfs.mount.links.as.symlinks";
 
   boolean CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT = true;
+
+  /**
+   * When initializing the viewfs, authority will be used as the mount table
+   * name to find the mount link configurations. To make the mount table name
+   * unique, we may want to ignore port if initialized uri authority contains
+   * port number. By default, we will consider port number also in
+   * ViewFileSystem(This default value false, because to support existing
+   * deployments continue with the current behavior).
+   */
+  String CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME =
+  "fs.viewfs.ignore.port.in.mount.table.name";
+
+  boolean CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT = false;
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index cb36965..0beeda2 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -20,6 +20,8 @@ package org.apache.hadoop.fs.viewfs;
 import static 
org.apache.hadoop.fs.impl.PathCapabilitiesSupport.validatePathCapabilityArgs;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.PERMISSION_555;
@@ -274,9 +276,15 @@ public class ViewFileSystem extends FileSystem {
 final InnerCache innerCache = new InnerCache(fsGetter);
 // Now build  client side view (i.e. client side mount table) from config.
 final String authority = theUri.getAuthority();
+String tableName = authority;
+if (theUri.getPort() != -1 && config
+.getBoolean(CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME,
+CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT)) {
+  tableName = theUri.getHost();
+}
 try {
   myUri = new URI(getScheme(), authority, "/", null, null);
-  fsState = new InodeTree(conf, authority) {
+  fsState = new InodeTree(conf, tableName) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
   throws URISyntaxException, IOException {
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index 672022b..2f3359d 100644
--- 
a/h

[hadoop] branch branch-3.3 updated: HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new ae8261c  HDFS-15464: ViewFsOverloadScheme should work when -fs option 
pointing to remote cluster without mount links (#2132). Contributed by Uma 
Maheswara Rao G.
ae8261c is described below

commit ae8261c6719008b89b886d533207a8cbcb22d36a
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 11 23:50:04 2020 -0700

HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to 
remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

(cherry picked from commit 3e700066394fb9f516e23537d8abb4661409cae1)
---
 .../java/org/apache/hadoop/fs/FsConstants.java |  2 ++
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 22 +---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 13 +++-
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 12 +++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 16 +++--
 .../apache/hadoop/fs/viewfs/TestViewFsConfig.java  |  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 39 --
 .../src/site/markdown/ViewFsOverloadScheme.md  |  3 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 20 +++
 9 files changed, 102 insertions(+), 27 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
index 07c16b2..344048f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
@@ -44,4 +44,6 @@ public interface FsConstants {
   public static final String VIEWFS_SCHEME = "viewfs";
   String FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN =
   "fs.viewfs.overload.scheme.target.%s.impl";
+  String VIEWFS_TYPE = "viewfs";
+  String VIEWFSOS_TYPE = "viewfsOverloadScheme";
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 3d709b1..422e733 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -67,7 +68,7 @@ abstract class InodeTree<T> {
   // the root of the mount table
   private final INode<T> root;
   // the fallback filesystem
-  private final INodeLink<T> rootFallbackLink;
+  private INodeLink<T> rootFallbackLink;
   // the homedir for this mount table
   private final String homedirPrefix;
   private List<MountPoint<T>> mountPoints = new ArrayList<MountPoint<T>>();
@@ -460,7 +461,8 @@ abstract class InodeTree {
* @throws FileAlreadyExistsException
* @throws IOException
*/
-  protected InodeTree(final Configuration config, final String viewName)
+  protected InodeTree(final Configuration config, final String viewName,
+  final URI theUri, boolean initingUriAsFallbackOnNoMounts)
   throws UnsupportedFileSystemException, URISyntaxException,
   FileAlreadyExistsException, IOException {
 String mountTableName = viewName;
@@ -596,9 +598,19 @@ abstract class InodeTree {
 }
 
 if (!gotMountTableEntry) {
-  throw new IOException(
-  "ViewFs: Cannot initialize: Empty Mount table in config for " +
-  "viewfs://" + mountTableName + "/");
+  if (!initingUriAsFallbackOnNoMounts) {
+throw new IOException(
+"ViewFs: Cannot initialize: Empty Mount table in config for "
++ "viewfs://" + mountTableName + "/");
+  }
+  StringBuilder msg =
+  new StringBuilder("Empty mount table detected for ").append(theUri)
+  .append(" and considering itself as a linkFallback.");
+  FileSystem.LOG.info(msg.toString());
+  rootFallbackLink =
+  new INodeLink(mountTableName, ugi, getTargetFileSystem(theUri),
+  theUri);
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
   }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileS

[hadoop] branch branch-3.3 updated: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs. (#2160). Contributed by Uma Maheswara Rao

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 4fe491d  HDFS-15478: When Empty mount points, we are assigning 
fallback link to self. But it should not use full URI for target fs. (#2160). 
Contributed by Uma Maheswara Rao G.
4fe491d is described below

commit 4fe491d10edd5e4e91ccf7fd76131e4552ce79a2
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jul 21 23:29:10 2020 -0700

HDFS-15478: When Empty mount points, we are assigning fallback link to 
self. But it should not use full URI for target fs. (#2160). Contributed by Uma 
Maheswara Rao G.

(cherry picked from commit ac9a07b51aefd0fd3b4602adc844ab0f172835e3)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 27 +++---
 .../src/site/markdown/ViewFsOverloadScheme.md  |  2 ++
 3 files changed, 22 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 1fc531e..baf0027 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -294,7 +294,7 @@ public class ViewFileSystem extends FileSystem {
   myUri = new URI(getScheme(), authority, "/", null, null);
   boolean initingUriAsFallbackOnNoMounts =
   !FsConstants.VIEWFS_TYPE.equals(getType());
-  fsState = new InodeTree(conf, tableName, theUri,
+  fsState = new InodeTree(conf, tableName, myUri,
   initingUriAsFallbackOnNoMounts) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
index 300fdd8..7afc789 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
@@ -127,19 +127,30 @@ public class TestViewFsOverloadSchemeListStatus {
 
   /**
* Tests that ViewFSOverloadScheme should consider initialized fs as fallback
-   * if there are no mount links configured.
+   * if there are no mount links configured. It should add fallback with the
+   * chrootedFS at it's uri's root.
*/
   @Test(timeout = 3)
   public void testViewFSOverloadSchemeWithoutAnyMountLinks() throws Exception {
-try (FileSystem fs = FileSystem.get(TEST_DIR.toPath().toUri(), conf)) {
+Path initUri = new Path(TEST_DIR.toURI().toString(), "init");
+try (FileSystem fs = FileSystem.get(initUri.toUri(), conf)) {
   ViewFileSystemOverloadScheme vfs = (ViewFileSystemOverloadScheme) fs;
   assertEquals(0, vfs.getMountPoints().length);
-  Path testFallBack = new Path("test", FILE_NAME);
-  assertTrue(vfs.mkdirs(testFallBack));
-  FileStatus[] status = vfs.listStatus(testFallBack.getParent());
-  assertEquals(FILE_NAME, status[0].getPath().getName());
-  assertEquals(testFallBack.getName(),
-  vfs.getFileLinkStatus(testFallBack).getPath().getName());
+  Path testOnFallbackPath = new Path(TEST_DIR.toURI().toString(), "test");
+  assertTrue(vfs.mkdirs(testOnFallbackPath));
+  FileStatus[] status = vfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(Path.getPathWithoutSchemeAndAuthority(testOnFallbackPath),
+  Path.getPathWithoutSchemeAndAuthority(status[0].getPath()));
+  //Check directly on localFS. The fallBackFs(localFS) should be chrooted
+  //at it's root. So, after
+  FileSystem lfs = vfs.getRawFileSystem(testOnFallbackPath, conf);
+  FileStatus[] statusOnLocalFS =
+  lfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(testOnFallbackPath.getName(),
+  statusOnLocalFS[0].getPath().getName());
+  //initUri should not have exist in lfs, as it would have chrooted on it's
+  // root only.
+  assertFalse(lfs.exists(initUri));
 }
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
index 564bc03..f3eb336 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
@@ -34,6 

[hadoop] branch branch-3.3 updated: HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new aea1a8e  HDFS-15450. Fix NN trash emptier to work if 
ViewFSOveroadScheme enabled. Contributed by Uma Maheswara Rao G.
aea1a8e is described below

commit aea1a8e2bd780a2295bd1aa83640e733c3385a6a
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 13:45:49 2020 -0700

HDFS-15450. Fix NN trash emptier to work if ViewFSOveroadScheme enabled. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 55a2ae80dc9b45413febd33840b8a653e3e29440)
---
 .../hadoop/hdfs/server/namenode/NameNode.java  |  7 ++
 ...stNNStartupWhenViewFSOverloadSchemeEnabled.java | 88 ++
 2 files changed, 95 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 74757e5..7c2026c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.ha.ServiceFailedException;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.DFSUtilClient;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HAUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
@@ -384,6 +385,7 @@ public class NameNode extends ReconfigurableBase implements
*/
   @Deprecated
   public static final int DEFAULT_PORT = DFS_NAMENODE_RPC_PORT_DEFAULT;
+  public static final String FS_HDFS_IMPL_KEY = "fs.hdfs.impl";
   public static final Logger LOG =
   LoggerFactory.getLogger(NameNode.class.getName());
   public static final Logger stateChangeLog =
@@ -725,6 +727,11 @@ public class NameNode extends ReconfigurableBase implements
   intervals);
   }
 }
+// Currently NN uses FileSystem.get to initialize DFS in startTrashEmptier.
+// If fs.hdfs.impl was overridden by core-site.xml, we may get other
+// filesystem. To make sure we get DFS, we are setting fs.hdfs.impl to DFS.
+// HDFS-15450
+conf.set(FS_HDFS_IMPL_KEY, DistributedFileSystem.class.getName());
 
 UserGroupInformation.setConfiguration(conf);
 loginAsNameNodeUser(conf);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
new file mode 100644
index 000..9d394c0
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests that the NN startup is successful with ViewFSOverloadScheme.
+ */
+public class TestNNStartupWhenViewFSOverloadSchemeEnabled {
+  private MiniDFSCluster cluster;
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private static final Configuration CONF = new Configuration();
+
+  @BeforeClass
+  public static void setUp() {
+CONF.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
+CONF.setInt(DFSConfigKeys

[hadoop] branch branch-3.3 updated: HDFS-15430. create should work when parent dir is internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

2020-07-31 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 35fe6fd  HDFS-15430. create should work when parent dir is internalDir 
and fallback configured. Contributed by Uma Maheswara Rao G.
35fe6fd is described below

commit 35fe6fd54fdc935ed73fa080925c812fe6f493a2
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 00:12:10 2020 -0700

HDFS-15430. create should work when parent dir is internalDir and fallback 
configured. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 1f2a80b5e5024aeb7fb1f8c31b8fdd0fdb88bb66)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  37 -
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  37 +
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 148 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |  28 
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 154 +
 5 files changed, 375 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 39d78cf..cb36965 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -41,6 +41,7 @@ import java.util.Map.Entry;
 import java.util.Objects;
 import java.util.Set;
 
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -1180,7 +1181,41 @@ public class ViewFileSystem extends FileSystem {
 public FSDataOutputStream create(final Path f,
 final FsPermission permission, final boolean overwrite,
 final int bufferSize, final short replication, final long blockSize,
-final Progressable progress) throws AccessControlException {
+final Progressable progress) throws IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsException(
+"/ is not a file. The directory / already exist at: "
++ theInternalDir.fullPath);
+  }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+
+if (theInternalDir.getChildren().containsKey(f.getName())) {
+  throw new FileAlreadyExistsException(
+  "A mount path(file/dir) already exist with the requested path: "
+  + theInternalDir.getChildren().get(f.getName()).fullPath);
+}
+
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leaf = f.getName();
+Path fileToCreate = new Path(parent, leaf);
+
+try {
+  return linkedFallbackFs
+  .create(fileToCreate, permission, overwrite, bufferSize,
+  replication, blockSize, progress);
+} catch (IOException e) {
+  StringBuilder msg =
+  new StringBuilder("Failed to create file:").append(fileToCreate)
+  .append(" at fallback : ").append(linkedFallbackFs.getUri());
+  LOG.error(msg.toString(), e);
+  throw e;
+}
+  }
   throw readOnlyMountTable("create", f);
 }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index c769003..a63960c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -33,6 +33,8 @@ import java.util.Map;
 import java.util.Map.Entry;
 
 import java.util.Set;
+
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -919,6 +921,41 @@ public class ViewFs extends AbstractFileSystem {
 FileAlreadyExistsException, FileNotFoundException,
 ParentNotDirectoryException, UnsupportedFileSystemException,
 UnresolvedLinkException, IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsE

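This backport also changes ViewFs.java, the AbstractFileSystem flavor reached through FileContext. A hedged sketch of that entry point, with an invented mount-table name and the same assumptions as the earlier examples (illustrative NameNode address, reachable cluster):

import java.net.URI;
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.fs.viewfs.ConfigUtil;

public class ViewFsCreateSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    ConfigUtil.addLink(conf, "clusterZ", "/data",
        URI.create("hdfs://nn1:8020/data"));
    ConfigUtil.addLinkFallback(conf, "clusterZ", URI.create("hdfs://nn1:8020/"));

    FileContext fc =
        FileContext.getFileContext(URI.create("viewfs://clusterZ/"), conf);
    // Parent "/" is an internal dir; with a fallback configured, create() now
    // delegates to the fallback filesystem instead of failing as read-only.
    FSDataOutputStream out = fc.create(new Path("/audit.log"),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
        Options.CreateOpts.perms(FsPermission.getDefault()));
    out.close();
  }
}
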
[hadoop] branch trunk updated: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs. (#2160). Contributed by Uma Maheswara Rao G.

2020-07-22 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ac9a07b  HDFS-15478: When Empty mount points, we are assigning 
fallback link to self. But it should not use full URI for target fs. (#2160). 
Contributed by Uma Maheswara Rao G.
ac9a07b is described below

commit ac9a07b51aefd0fd3b4602adc844ab0f172835e3
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jul 21 23:29:10 2020 -0700

HDFS-15478: When Empty mount points, we are assigning fallback link to 
self. But it should not use full URI for target fs. (#2160). Contributed by Uma 
Maheswara Rao G.
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 27 +++---
 .../src/site/markdown/ViewFsOverloadScheme.md  |  2 ++
 3 files changed, 22 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 1fc531e..baf0027 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -294,7 +294,7 @@ public class ViewFileSystem extends FileSystem {
   myUri = new URI(getScheme(), authority, "/", null, null);
   boolean initingUriAsFallbackOnNoMounts =
   !FsConstants.VIEWFS_TYPE.equals(getType());
-  fsState = new InodeTree(conf, tableName, theUri,
+  fsState = new InodeTree(conf, tableName, myUri,
   initingUriAsFallbackOnNoMounts) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
index 300fdd8..7afc789 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeListStatus.java
@@ -127,19 +127,30 @@ public class TestViewFsOverloadSchemeListStatus {
 
   /**
* Tests that ViewFSOverloadScheme should consider initialized fs as fallback
-   * if there are no mount links configured.
+   * if there are no mount links configured. It should add fallback with the
+   * chrootedFS at it's uri's root.
*/
   @Test(timeout = 3)
   public void testViewFSOverloadSchemeWithoutAnyMountLinks() throws Exception {
-try (FileSystem fs = FileSystem.get(TEST_DIR.toPath().toUri(), conf)) {
+Path initUri = new Path(TEST_DIR.toURI().toString(), "init");
+try (FileSystem fs = FileSystem.get(initUri.toUri(), conf)) {
   ViewFileSystemOverloadScheme vfs = (ViewFileSystemOverloadScheme) fs;
   assertEquals(0, vfs.getMountPoints().length);
-  Path testFallBack = new Path("test", FILE_NAME);
-  assertTrue(vfs.mkdirs(testFallBack));
-  FileStatus[] status = vfs.listStatus(testFallBack.getParent());
-  assertEquals(FILE_NAME, status[0].getPath().getName());
-  assertEquals(testFallBack.getName(),
-  vfs.getFileLinkStatus(testFallBack).getPath().getName());
+  Path testOnFallbackPath = new Path(TEST_DIR.toURI().toString(), "test");
+  assertTrue(vfs.mkdirs(testOnFallbackPath));
+  FileStatus[] status = vfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(Path.getPathWithoutSchemeAndAuthority(testOnFallbackPath),
+  Path.getPathWithoutSchemeAndAuthority(status[0].getPath()));
+  //Check directly on localFS. The fallBackFs(localFS) should be chrooted
+  //at it's root. So, after
+  FileSystem lfs = vfs.getRawFileSystem(testOnFallbackPath, conf);
+  FileStatus[] statusOnLocalFS =
+  lfs.listStatus(testOnFallbackPath.getParent());
+  assertEquals(testOnFallbackPath.getName(),
+  statusOnLocalFS[0].getPath().getName());
+  //initUri should not have exist in lfs, as it would have chrooted on it's
+  // root only.
+  assertFalse(lfs.exists(initUri));
 }
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
index 564bc03..f3eb336 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
@@ -34,6 +34,8 @@ If a user wants to continue use the same fs.defaultFS and 
wants to have mo
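
For context, a minimal sketch of the behaviour this commit (together with HDFS-15464 below) describes: with no mount links configured, the initialized file system becomes the fallback and is chrooted at the root of the initialized URI rather than at the full path. The scratch directory and paths below are illustrative assumptions, not taken from the commit.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FallbackChrootedAtRootSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Route file:// through the overload scheme; the target impl key matches
    // the core-default.xml entries referenced elsewhere in these commits.
    conf.set("fs.file.impl",
        "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");
    conf.set("fs.viewfs.overload.scheme.target.file.impl",
        "org.apache.hadoop.fs.LocalFileSystem");
    Path dir = new Path("/tmp/viewfs-overload-demo");   // assumed scratch dir
    // No mount links configured: the local FS acts as the fallback, chrooted
    // at the root of file:///, not at the initialized path below.
    try (FileSystem viewFs =
        FileSystem.get(URI.create("file:///tmp/init"), conf)) {
      viewFs.mkdirs(dir);
    }
    // The directory is visible at the same absolute path on the raw local FS.
    FileSystem localFs = FileSystem.getLocal(new Configuration());
    System.out.println(localFs.exists(dir));   // expected: true
  }
}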

[hadoop] branch trunk updated: HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.

2020-07-12 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3e70006  HDFS-15464: ViewFsOverloadScheme should work when -fs option 
pointing to remote cluster without mount links (#2132). Contributed by Uma 
Maheswara Rao G.
3e70006 is described below

commit 3e700066394fb9f516e23537d8abb4661409cae1
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 11 23:50:04 2020 -0700

HDFS-15464: ViewFsOverloadScheme should work when -fs option pointing to 
remote cluster without mount links (#2132). Contributed by Uma Maheswara Rao G.
---
 .../java/org/apache/hadoop/fs/FsConstants.java |  2 ++
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 22 +---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 13 +++-
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 12 +++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 16 +++--
 .../apache/hadoop/fs/viewfs/TestViewFsConfig.java  |  2 +-
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 39 --
 .../src/site/markdown/ViewFsOverloadScheme.md  |  3 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 20 +++
 9 files changed, 102 insertions(+), 27 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
index 07c16b2..344048f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
@@ -44,4 +44,6 @@ public interface FsConstants {
   public static final String VIEWFS_SCHEME = "viewfs";
   String FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN =
   "fs.viewfs.overload.scheme.target.%s.impl";
+  String VIEWFS_TYPE = "viewfs";
+  String VIEWFSOS_TYPE = "viewfsOverloadScheme";
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 3d709b1..422e733 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -67,7 +68,7 @@ abstract class InodeTree {
   // the root of the mount table
   private final INode root;
   // the fallback filesystem
-  private final INodeLink rootFallbackLink;
+  private INodeLink rootFallbackLink;
   // the homedir for this mount table
   private final String homedirPrefix;
   private List> mountPoints = new ArrayList>();
@@ -460,7 +461,8 @@ abstract class InodeTree {
* @throws FileAlreadyExistsException
* @throws IOException
*/
-  protected InodeTree(final Configuration config, final String viewName)
+  protected InodeTree(final Configuration config, final String viewName,
+  final URI theUri, boolean initingUriAsFallbackOnNoMounts)
   throws UnsupportedFileSystemException, URISyntaxException,
   FileAlreadyExistsException, IOException {
 String mountTableName = viewName;
@@ -596,9 +598,19 @@ abstract class InodeTree {
 }
 
 if (!gotMountTableEntry) {
-  throw new IOException(
-  "ViewFs: Cannot initialize: Empty Mount table in config for " +
-  "viewfs://" + mountTableName + "/");
+  if (!initingUriAsFallbackOnNoMounts) {
+throw new IOException(
+"ViewFs: Cannot initialize: Empty Mount table in config for "
++ "viewfs://" + mountTableName + "/");
+  }
+  StringBuilder msg =
+  new StringBuilder("Empty mount table detected for ").append(theUri)
+  .append(" and considering itself as a linkFallback.");
+  FileSystem.LOG.info(msg.toString());
+  rootFallbackLink =
+  new INodeLink(mountTableName, ugi, getTargetFileSystem(theUri),
+  theUri);
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
   }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewf
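
As a rough illustration of the behaviour described above (the equivalent of pointing -fs at a remote cluster with no mount links configured), the sketch below is assumption-laden: the cluster address, and setting the keys programmatically rather than in core-site.xml, are illustrative only.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OverloadSchemeRemoteClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // hdfs:// URIs are served by the overload scheme...
    conf.set("fs.hdfs.impl",
        "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");
    // ...which delegates hdfs child paths to DistributedFileSystem (same key
    // pattern as FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN).
    conf.set("fs.viewfs.overload.scheme.target.hdfs.impl",
        "org.apache.hadoop.hdfs.DistributedFileSystem");
    // No mount links for this authority: with this change the initialized URI
    // itself becomes the fallback instead of initialization failing.
    URI remote = URI.create("hdfs://remotecluster:8020/");  // assumed address
    try (FileSystem fs = FileSystem.get(remote, conf)) {
      for (FileStatus st : fs.listStatus(new Path("/"))) {
        System.out.println(st.getPath());
      }
    }
  }
}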

[hadoop] branch branch-3.1 updated: HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to core-default.xml (#2131)

2020-07-09 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 47df224  HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to 
core-default.xml (#2131)
47df224 is described below

commit 47df2245d381001f6b89af0cf1670fe2dccf0108
Author: Siyao Meng <50227127+smen...@users.noreply.github.com>
AuthorDate: Thu Jul 9 12:38:52 2020 -0700

HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to 
core-default.xml (#2131)

(cherry picked from commit 0e694b20b9d59cc46882df506dcea386020b1e4d)
---
 .../hadoop-common/src/main/resources/core-default.xml | 8 
 .../org/apache/hadoop/conf/TestCommonConfigurationFields.java | 1 +
 2 files changed, 9 insertions(+)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 1f93394..82db132 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -924,6 +924,14 @@
 
 
 
+  fs.viewfs.overload.scheme.target.ofs.impl
+  org.apache.hadoop.fs.ozone.RootedOzoneFileSystem
+  The RootedOzoneFileSystem for view file system overload scheme
+when child file system and ViewFSOverloadScheme's schemes are ofs.
+  
+
+
+
   fs.viewfs.overload.scheme.target.o3fs.impl
   org.apache.hadoop.fs.ozone.OzoneFileSystem
   The OzoneFileSystem for view file system overload scheme when
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
index 4c9b8bb..9b89f11 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
@@ -126,6 +126,7 @@ public class TestCommonConfigurationFields extends 
TestConfigurationFieldsBase {
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.hdfs.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.http.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.https.impl");
+xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.ofs.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.o3fs.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.oss.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.s3a.impl");


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
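
The property added above follows the per-scheme key pattern used by the overload scheme; a small, hypothetical sketch of setting the same key programmatically (the Ozone class name is the one from the property above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsConstants;

public class OfsTargetImplSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Build "fs.viewfs.overload.scheme.target.ofs.impl" from the pattern constant.
    String ofsTargetKey = String.format(
        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN, "ofs");
    conf.set(ofsTargetKey, "org.apache.hadoop.fs.ozone.RootedOzoneFileSystem");
    System.out.println(ofsTargetKey + " = " + conf.get(ofsTargetKey));
  }
}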



[hadoop] branch branch-3.2 updated: HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to core-default.xml (#2131)

2020-07-09 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 0bd0cb1  HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to 
core-default.xml (#2131)
0bd0cb1 is described below

commit 0bd0cb1b06102d3b52beaf733ade8593ed4b3ca6
Author: Siyao Meng <50227127+smen...@users.noreply.github.com>
AuthorDate: Thu Jul 9 12:38:52 2020 -0700

HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to 
core-default.xml (#2131)

(cherry picked from commit 0e694b20b9d59cc46882df506dcea386020b1e4d)
---
 .../hadoop-common/src/main/resources/core-default.xml | 8 
 .../org/apache/hadoop/conf/TestCommonConfigurationFields.java | 1 +
 2 files changed, 9 insertions(+)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 79d5199..d3080e1 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -926,6 +926,14 @@
 
 
 
+  fs.viewfs.overload.scheme.target.ofs.impl
+  org.apache.hadoop.fs.ozone.RootedOzoneFileSystem
+  The RootedOzoneFileSystem for view file system overload scheme
+when child file system and ViewFSOverloadScheme's schemes are ofs.
+  
+
+
+
   fs.viewfs.overload.scheme.target.o3fs.impl
   org.apache.hadoop.fs.ozone.OzoneFileSystem
   The OzoneFileSystem for view file system overload scheme when
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
index bd93f7e..04b6db6 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
@@ -132,6 +132,7 @@ public class TestCommonConfigurationFields extends 
TestConfigurationFieldsBase {
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.hdfs.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.http.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.https.impl");
+xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.ofs.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.o3fs.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.oss.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.s3a.impl");


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to core-default.xml (#2131)

2020-07-09 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 3589340  HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to 
core-default.xml (#2131)
3589340 is described below

commit 358934059f6421166caf76e8e70d9e852b1ce8dc
Author: Siyao Meng <50227127+smen...@users.noreply.github.com>
AuthorDate: Thu Jul 9 12:38:52 2020 -0700

HDFS-15462. Add fs.viewfs.overload.scheme.target.ofs.impl to 
core-default.xml (#2131)

(cherry picked from commit 0e694b20b9d59cc46882df506dcea386020b1e4d)
---
 .../hadoop-common/src/main/resources/core-default.xml | 8 
 .../org/apache/hadoop/conf/TestCommonConfigurationFields.java | 1 +
 2 files changed, 9 insertions(+)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index accb1b9..cf156af 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -968,6 +968,14 @@
 
 
 
+  fs.viewfs.overload.scheme.target.ofs.impl
+  org.apache.hadoop.fs.ozone.RootedOzoneFileSystem
+  The RootedOzoneFileSystem for view file system overload scheme
+when child file system and ViewFSOverloadScheme's schemes are ofs.
+  
+
+
+
   fs.viewfs.overload.scheme.target.o3fs.impl
   org.apache.hadoop.fs.ozone.OzoneFileSystem
   The OzoneFileSystem for view file system overload scheme when
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
index 3b9947e..dd9f41a 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
@@ -133,6 +133,7 @@ public class TestCommonConfigurationFields extends 
TestConfigurationFieldsBase {
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.hdfs.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.http.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.https.impl");
+xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.ofs.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.o3fs.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.oss.impl");
 xmlPropsToSkipCompare.add("fs.viewfs.overload.scheme.target.s3a.impl");


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.1 updated: HDFS-15394. Add all available fs.viewfs.overload.scheme.target..impl classes in core-default.xml by default. Contributed by Uma Maheswara Rao G.

2020-07-09 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 4e1abe6  HDFS-15394. Add all available 
fs.viewfs.overload.scheme.target..impl classes in core-default.xml 
by default. Contributed by Uma Maheswara Rao G.
4e1abe6 is described below

commit 4e1abe61a28517d04e168d260ac2942f0c65d388
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jun 6 08:11:57 2020 -0700

HDFS-15394. Add all available 
fs.viewfs.overload.scheme.target..impl classes in core-default.xml 
by default. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 3ca15292c5584ec220b3eeaf76da85d228bcbd8b)
---
 .../src/main/resources/core-default.xml| 110 +
 .../hadoop/conf/TestCommonConfigurationFields.java |  18 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |   9 +-
 3 files changed, 136 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 01ac52e..1f93394 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -909,6 +909,116 @@
 
 
 
+  fs.viewfs.overload.scheme.target.hdfs.impl
+  org.apache.hadoop.hdfs.DistributedFileSystem
+  The DistributedFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are hdfs.
+   
+
+
+
+  fs.viewfs.overload.scheme.target.s3a.impl
+  org.apache.hadoop.fs.s3a.S3AFileSystem
+  The S3AFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are s3a.
+
+
+
+  fs.viewfs.overload.scheme.target.o3fs.impl
+  org.apache.hadoop.fs.ozone.OzoneFileSystem
+  The OzoneFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are o3fs.
+
+
+
+  fs.viewfs.overload.scheme.target.ftp.impl
+  org.apache.hadoop.fs.ftp.FTPFileSystem
+  The FTPFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are ftp.
+   
+
+
+
+  fs.viewfs.overload.scheme.target.webhdfs.impl
+  org.apache.hadoop.hdfs.web.WebHdfsFileSystem
+  The WebHdfsFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are webhdfs.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.swebhdfs.impl
+  org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
+  The SWebHdfsFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are swebhdfs.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.file.impl
+  org.apache.hadoop.fs.LocalFileSystem
+  The LocalFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are file.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.abfs.impl
+  org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem
+  The AzureBlobFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are abfs.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.abfss.impl
+  org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem
+  The SecureAzureBlobFileSystem for view file system overload
+   scheme when child file system and ViewFSOverloadScheme's schemes are abfss.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.wasb.impl
+  org.apache.hadoop.fs.azure.NativeAzureFileSystem
+  The NativeAzureFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are wasb.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.swift.impl
+  org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
+  The SwiftNativeFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are swift.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.oss.impl
+  org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
+  The AliyunOSSFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are oss.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.http.impl
+  org.apache.hadoop.fs.http.HttpFileSystem
+  The HttpFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are http.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.https.impl
+  org.apache.hadoop.fs.http.HttpsFileSystem
+  The HttpsFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are https.
+  
+
+
+
   fs.AbstractFileSystem.ftp.impl
   org.apache.hadoop.fs.ftp.FtpFs
   The FileSystem for Ftp: uris.
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf

[hadoop] branch branch-3.2 updated: HDFS-15394. Add all available fs.viewfs.overload.scheme.target..impl classes in core-default.xml by default. Contributed by Uma Maheswara Rao G.

2020-07-09 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 7607f24  HDFS-15394. Add all available 
fs.viewfs.overload.scheme.target..impl classes in core-default.xml 
by default. Contributed by Uma Maheswara Rao G.
7607f24 is described below

commit 7607f24c9e917591037f7895ba6a23390fe4b2d8
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jun 6 08:11:57 2020 -0700

HDFS-15394. Add all available 
fs.viewfs.overload.scheme.target..impl classes in core-default.xml 
by default. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 3ca15292c5584ec220b3eeaf76da85d228bcbd8b)
---
 .../src/main/resources/core-default.xml| 110 +
 .../hadoop/conf/TestCommonConfigurationFields.java |  18 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |   9 +-
 3 files changed, 136 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 4374fb3..79d5199 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -911,6 +911,116 @@
 
 
 
+  fs.viewfs.overload.scheme.target.hdfs.impl
+  org.apache.hadoop.hdfs.DistributedFileSystem
+  The DistributedFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are hdfs.
+   
+
+
+
+  fs.viewfs.overload.scheme.target.s3a.impl
+  org.apache.hadoop.fs.s3a.S3AFileSystem
+  The S3AFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are s3a.
+
+
+
+  fs.viewfs.overload.scheme.target.o3fs.impl
+  org.apache.hadoop.fs.ozone.OzoneFileSystem
+  The OzoneFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are o3fs.
+
+
+
+  fs.viewfs.overload.scheme.target.ftp.impl
+  org.apache.hadoop.fs.ftp.FTPFileSystem
+  The FTPFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are ftp.
+   
+
+
+
+  fs.viewfs.overload.scheme.target.webhdfs.impl
+  org.apache.hadoop.hdfs.web.WebHdfsFileSystem
+  The WebHdfsFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are webhdfs.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.swebhdfs.impl
+  org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
+  The SWebHdfsFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are swebhdfs.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.file.impl
+  org.apache.hadoop.fs.LocalFileSystem
+  The LocalFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are file.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.abfs.impl
+  org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem
+  The AzureBlobFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are abfs.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.abfss.impl
+  org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem
+  The SecureAzureBlobFileSystem for view file system overload
+   scheme when child file system and ViewFSOverloadScheme's schemes are abfss.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.wasb.impl
+  org.apache.hadoop.fs.azure.NativeAzureFileSystem
+  The NativeAzureFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are wasb.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.swift.impl
+  org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
+  The SwiftNativeFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are swift.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.oss.impl
+  org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
+  The AliyunOSSFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are oss.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.http.impl
+  org.apache.hadoop.fs.http.HttpFileSystem
+  The HttpFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are http.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.https.impl
+  org.apache.hadoop.fs.http.HttpsFileSystem
+  The HttpsFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are https.
+  
+
+
+
   fs.AbstractFileSystem.ftp.impl
   org.apache.hadoop.fs.ftp.FtpFs
   The FileSystem for Ftp: uris.
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf

[hadoop] branch branch-3.3 updated: HDFS-15394. Add all available fs.viewfs.overload.scheme.target..impl classes in core-default.xml by default. Contributed by Uma Maheswara Rao G.

2020-07-09 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new f85ce25  HDFS-15394. Add all available 
fs.viewfs.overload.scheme.target..impl classes in core-default.xml 
by default. Contributed by Uma Maheswara Rao G.
f85ce25 is described below

commit f85ce2570e796e0ca838535340cd3c049634a225
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jun 6 08:11:57 2020 -0700

HDFS-15394. Add all available 
fs.viewfs.overload.scheme.target..impl classes in core-default.xml 
by default. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 3ca15292c5584ec220b3eeaf76da85d228bcbd8b)
---
 .../src/main/resources/core-default.xml| 110 +
 .../hadoop/conf/TestCommonConfigurationFields.java |  18 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |   9 +-
 3 files changed, 136 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 0d583cc..accb1b9 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -953,6 +953,116 @@
 
 
 
+  fs.viewfs.overload.scheme.target.hdfs.impl
+  org.apache.hadoop.hdfs.DistributedFileSystem
+  The DistributedFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are hdfs.
+   
+
+
+
+  fs.viewfs.overload.scheme.target.s3a.impl
+  org.apache.hadoop.fs.s3a.S3AFileSystem
+  The S3AFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are s3a.
+
+
+
+  fs.viewfs.overload.scheme.target.o3fs.impl
+  org.apache.hadoop.fs.ozone.OzoneFileSystem
+  The OzoneFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are o3fs.
+
+
+
+  fs.viewfs.overload.scheme.target.ftp.impl
+  org.apache.hadoop.fs.ftp.FTPFileSystem
+  The FTPFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are ftp.
+   
+
+
+
+  fs.viewfs.overload.scheme.target.webhdfs.impl
+  org.apache.hadoop.hdfs.web.WebHdfsFileSystem
+  The WebHdfsFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are webhdfs.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.swebhdfs.impl
+  org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
+  The SWebHdfsFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are swebhdfs.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.file.impl
+  org.apache.hadoop.fs.LocalFileSystem
+  The LocalFileSystem for view file system overload scheme when
+   child file system and ViewFSOverloadScheme's schemes are file.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.abfs.impl
+  org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem
+  The AzureBlobFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are abfs.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.abfss.impl
+  org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem
+  The SecureAzureBlobFileSystem for view file system overload
+   scheme when child file system and ViewFSOverloadScheme's schemes are abfss.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.wasb.impl
+  org.apache.hadoop.fs.azure.NativeAzureFileSystem
+  The NativeAzureFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are wasb.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.swift.impl
+  org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
+  The SwiftNativeFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are swift.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.oss.impl
+  org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
+  The AliyunOSSFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are oss.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.http.impl
+  org.apache.hadoop.fs.http.HttpFileSystem
+  The HttpFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are http.
+  
+
+
+
+  fs.viewfs.overload.scheme.target.https.impl
+  org.apache.hadoop.fs.http.HttpsFileSystem
+  The HttpsFileSystem for view file system overload scheme
+   when child file system and ViewFSOverloadScheme's schemes are https.
+  
+
+
+
   fs.AbstractFileSystem.ftp.impl
   org.apache.hadoop.fs.ftp.FtpFs
   The FileSystem for Ftp: uris.
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf

[hadoop] branch trunk updated: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri. Contributed by Uma Maheswara Rao G.

2020-07-06 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new dc0626b  HDFS-15449. Optionally ignore port number in mount-table name 
when picking from initialized uri. Contributed by Uma Maheswara Rao G.
dc0626b is described below

commit dc0626b5f2f2ba0bd3919650ea231cedd424f77a
Author: Uma Maheswara Rao G 
AuthorDate: Mon Jul 6 18:50:03 2020 -0700

HDFS-15449. Optionally ignore port number in mount-table name when picking 
from initialized uri. Contributed by Uma Maheswara Rao G.
---
 .../org/apache/hadoop/fs/viewfs/Constants.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 10 -
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 13 ++-
 .../src/site/markdown/ViewFsOverloadScheme.md  |  8 +++-
 ...SystemOverloadSchemeHdfsFileSystemContract.java |  4 ++
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 45 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 17 
 ...ViewFileSystemOverloadSchemeWithFSCommands.java |  2 +-
 8 files changed, 97 insertions(+), 15 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 28ebf73..492cb87 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -104,4 +104,17 @@ public interface Constants {
   "fs.viewfs.mount.links.as.symlinks";
 
   boolean CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT = true;
+
+  /**
+   * When initializing the viewfs, authority will be used as the mount table
+   * name to find the mount link configurations. To make the mount table name
+   * unique, we may want to ignore port if initialized uri authority contains
+   * port number. By default, we will consider port number also in
+   * ViewFileSystem(This default value false, because to support existing
+   * deployments continue with the current behavior).
+   */
+  String CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME =
+  "fs.viewfs.ignore.port.in.mount.table.name";
+
+  boolean CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT = false;
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index cb36965..0beeda2 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -20,6 +20,8 @@ package org.apache.hadoop.fs.viewfs;
 import static 
org.apache.hadoop.fs.impl.PathCapabilitiesSupport.validatePathCapabilityArgs;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.PERMISSION_555;
@@ -274,9 +276,15 @@ public class ViewFileSystem extends FileSystem {
 final InnerCache innerCache = new InnerCache(fsGetter);
 // Now build  client side view (i.e. client side mount table) from config.
 final String authority = theUri.getAuthority();
+String tableName = authority;
+if (theUri.getPort() != -1 && config
+.getBoolean(CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME,
+CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME_DEFAULT)) {
+  tableName = theUri.getHost();
+}
 try {
   myUri = new URI(getScheme(), authority, "/", null, null);
-  fsState = new InodeTree(conf, authority) {
+  fsState = new InodeTree(conf, tableName) {
 @Override
 protected FileSystem getTargetFileSystem(final URI uri)
   throws URISyntaxException, IOException {
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index 672022b..2f3359d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileS
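
A hypothetical sketch of the new flag in use; the mount table name, link and target address are assumptions, and running it end to end needs a reachable target cluster:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.viewfs.ConfigUtil;

public class IgnorePortInMountTableNameSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // With the flag on, viewfs://cluster:8020/ and viewfs://cluster/ both
    // resolve against the mount table named "cluster" (host only, no port).
    conf.setBoolean("fs.viewfs.ignore.port.in.mount.table.name", true);
    ConfigUtil.addLink(conf, "cluster", "/data",
        URI.create("hdfs://nn1:8020/data"));          // assumed target
    try (FileSystem viewFs =
        FileSystem.get(URI.create("viewfs://cluster:8020/"), conf)) {
      // The URI keeps its port; only the mount table lookup ignores it.
      System.out.println(viewFs.getUri());
    }
  }
}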

[hadoop] branch trunk updated: HDFS-15450. Fix NN trash emptier to work if ViewFSOverloadScheme enabled. Contributed by Uma Maheswara Rao G.

2020-07-04 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 55a2ae8  HDFS-15450. Fix NN trash emptier to work if 
ViewFSOverloadScheme enabled. Contributed by Uma Maheswara Rao G.
55a2ae8 is described below

commit 55a2ae80dc9b45413febd33840b8a653e3e29440
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 13:45:49 2020 -0700

HDFS-15450. Fix NN trash emptier to work if ViewFSOverloadScheme enabled. 
Contributed by Uma Maheswara Rao G.
---
 .../hadoop/hdfs/server/namenode/NameNode.java  |  7 ++
 ...stNNStartupWhenViewFSOverloadSchemeEnabled.java | 88 ++
 2 files changed, 95 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 74757e5..7c2026c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.ha.ServiceFailedException;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.DFSUtilClient;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HAUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
@@ -384,6 +385,7 @@ public class NameNode extends ReconfigurableBase implements
*/
   @Deprecated
   public static final int DEFAULT_PORT = DFS_NAMENODE_RPC_PORT_DEFAULT;
+  public static final String FS_HDFS_IMPL_KEY = "fs.hdfs.impl";
   public static final Logger LOG =
   LoggerFactory.getLogger(NameNode.class.getName());
   public static final Logger stateChangeLog =
@@ -725,6 +727,11 @@ public class NameNode extends ReconfigurableBase implements
   intervals);
   }
 }
+// Currently NN uses FileSystem.get to initialize DFS in startTrashEmptier.
+// If fs.hdfs.impl was overridden by core-site.xml, we may get other
+// filesystem. To make sure we get DFS, we are setting fs.hdfs.impl to DFS.
+// HDFS-15450
+conf.set(FS_HDFS_IMPL_KEY, DistributedFileSystem.class.getName());
 
 UserGroupInformation.setConfiguration(conf);
 loginAsNameNodeUser(conf);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
new file mode 100644
index 000..9d394c0
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestNNStartupWhenViewFSOverloadSchemeEnabled.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests that the NN startup is successful with ViewFSOverloadScheme.
+ */
+public class TestNNStartupWhenViewFSOverloadSchemeEnabled {
+  private MiniDFSCluster cluster;
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private static final Configuration CONF = new Configuration();
+
+  @BeforeClass
+  public static void setUp() {
+CONF.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
+CONF.setInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, 1);
+CONF.setInt(
+CommonConfigurati
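
The code comment in the NameNode hunk above is the crux of the fix; the sketch below shows the same idea from the client side, with an assumed NameNode address, purely to illustrate what the pinning guards against:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsHdfsImplPinningSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // In an overload-scheme deployment, core-site.xml may map hdfs:// to
    // ViewFileSystemOverloadScheme, so FileSystem.get(hdfsUri, conf) would no
    // longer hand back a DistributedFileSystem to the trash emptier.
    // The fix pins the implementation back, exactly as the NameNode now does:
    conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
    URI nn = URI.create("hdfs://localhost:8020");    // assumed NameNode address
    FileSystem fs = FileSystem.get(nn, conf);
    System.out.println(fs.getClass().getName());     // DistributedFileSystem
  }
}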

[hadoop] branch trunk updated: HDFS-15430. create should work when parent dir is internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

2020-07-04 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1f2a80b  HDFS-15430. create should work when parent dir is internalDir 
and fallback configured. Contributed by Uma Maheswara Rao G.
1f2a80b is described below

commit 1f2a80b5e5024aeb7fb1f8c31b8fdd0fdb88bb66
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jul 4 00:12:10 2020 -0700

HDFS-15430. create should work when parent dir is internalDir and fallback 
configured. Contributed by Uma Maheswara Rao G.
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  37 -
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  37 +
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 148 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |  28 
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 154 +
 5 files changed, 375 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 39d78cf..cb36965 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -41,6 +41,7 @@ import java.util.Map.Entry;
 import java.util.Objects;
 import java.util.Set;
 
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -1180,7 +1181,41 @@ public class ViewFileSystem extends FileSystem {
 public FSDataOutputStream create(final Path f,
 final FsPermission permission, final boolean overwrite,
 final int bufferSize, final short replication, final long blockSize,
-final Progressable progress) throws AccessControlException {
+final Progressable progress) throws IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsException(
+"/ is not a file. The directory / already exist at: "
++ theInternalDir.fullPath);
+  }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+
+if (theInternalDir.getChildren().containsKey(f.getName())) {
+  throw new FileAlreadyExistsException(
+  "A mount path(file/dir) already exist with the requested path: "
+  + theInternalDir.getChildren().get(f.getName()).fullPath);
+}
+
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leaf = f.getName();
+Path fileToCreate = new Path(parent, leaf);
+
+try {
+  return linkedFallbackFs
+  .create(fileToCreate, permission, overwrite, bufferSize,
+  replication, blockSize, progress);
+} catch (IOException e) {
+  StringBuilder msg =
+  new StringBuilder("Failed to create file:").append(fileToCreate)
+  .append(" at fallback : ").append(linkedFallbackFs.getUri());
+  LOG.error(msg.toString(), e);
+  throw e;
+}
+  }
   throw readOnlyMountTable("create", f);
 }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index c769003..a63960c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -33,6 +33,8 @@ import java.util.Map;
 import java.util.Map.Entry;
 
 import java.util.Set;
+
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -919,6 +921,41 @@ public class ViewFs extends AbstractFileSystem {
 FileAlreadyExistsException, FileNotFoundException,
 ParentNotDirectoryException, UnsupportedFileSystemException,
 UnresolvedLinkException, IOException {
+  Preconditions.checkNotNull(f, "File cannot be null.");
+  if (InodeTree.SlashPath.equals(f)) {
+throw new FileAlreadyExistsException(
+"/ is not a file. The

[hadoop] branch branch-3.1 updated: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable (#2100)

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new c4e0098  HDFS-15436. Default mount table name used by ViewFileSystem 
should be configurable (#2100)
c4e0098 is described below

commit c4e0098bd6054421397e33bd64239bdd13bd307b
Author: Virajith Jalaparti 
AuthorDate: Fri Jun 26 13:19:16 2020 -0700

HDFS-15436. Default mount table name used by ViewFileSystem should be 
configurable (#2100)

* HDFS-15436. Default mount table name used by ViewFileSystem should be 
configurable

* Replace Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE use in tests

* Address Uma's comments on PR#2100

* Sort lists in test to match without concern to order

* Address comments, fix checkstyle and fix failing tests

* Fix checkstyle

(cherry picked from commit bed0a3a37404e9defda13a5bffe5609e72466e46)
---
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java| 33 +++
 .../org/apache/hadoop/fs/viewfs/Constants.java | 10 +++-
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |  2 +-
 .../fs/viewfs/TestViewFsWithAuthorityLocalFs.java  |  5 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java| 10 +++-
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |  2 +-
 .../src/site/markdown/ViewFsOverloadScheme.md  | 33 +++
 .../hadoop/fs/viewfs/TestViewFileSystemHdfs.java   |  6 +-
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 68 --
 9 files changed, 141 insertions(+), 28 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
index 6dd1f65..7d29b8f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
@@ -66,8 +66,7 @@ public class ConfigUtil {
*/
   public static void addLink(final Configuration conf, final String src,
   final URI target) {
-addLink( conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, 
-src, target);   
+addLink(conf, getDefaultMountTableName(conf), src, target);
   }
 
   /**
@@ -88,8 +87,7 @@ public class ConfigUtil {
* @param target
*/
   public static void addLinkMergeSlash(Configuration conf, final URI target) {
-addLinkMergeSlash(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE,
-target);
+addLinkMergeSlash(conf, getDefaultMountTableName(conf), target);
   }
 
   /**
@@ -110,8 +108,7 @@ public class ConfigUtil {
* @param target
*/
   public static void addLinkFallback(Configuration conf, final URI target) {
-addLinkFallback(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE,
-target);
+addLinkFallback(conf, getDefaultMountTableName(conf), target);
   }
 
   /**
@@ -132,7 +129,7 @@ public class ConfigUtil {
* @param targets
*/
   public static void addLinkMerge(Configuration conf, final URI[] targets) {
-addLinkMerge(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, targets);
+addLinkMerge(conf, getDefaultMountTableName(conf), targets);
   }
 
   /**
@@ -166,8 +163,7 @@ public class ConfigUtil {
 
   public static void addLinkNfly(final Configuration conf, final String src,
   final URI ... targets) {
-addLinkNfly(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, src, null,
-targets);
+addLinkNfly(conf, getDefaultMountTableName(conf), src, null, targets);
   }
 
   /**
@@ -177,8 +173,7 @@ public class ConfigUtil {
*/
   public static void setHomeDirConf(final Configuration conf,
   final String homedir) {
-setHomeDirConf(  conf,
-Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE,   homedir);
+setHomeDirConf(conf, getDefaultMountTableName(conf), homedir);
   }
   
   /**
@@ -202,7 +197,7 @@ public class ConfigUtil {
* @return home dir value, null if variable is not in conf
*/
   public static String getHomeDirValue(final Configuration conf) {
-return getHomeDirValue(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE);
+return getHomeDirValue(conf, getDefaultMountTableName(conf));
   }
   
   /**
@@ -216,4 +211,18 @@ public class ConfigUtil {
 return conf.get(getConfigViewFsPrefix(mountTableName) + "." +
 Constants.CONFIG_VIEWFS_HOMEDIR);
   }
+
+  /**
+   * Get the name of the default mount table to use. If
+   * {@link Constants#CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY} is specified,
+   * it's value is returned. Otherwise,
+   * {@link Constants#CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE} is returned.
+   *
+   * @param conf Configuration to use.
+   * @return the name of the default mount table to use.
+   */
+  public sta
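
A short sketch of the configurable default mount table name; the chosen name and target are assumptions, and the key string is referenced through the new constant rather than spelled out here:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.viewfs.ConfigUtil;
import org.apache.hadoop.fs.viewfs.Constants;

public class DefaultMountTableNameSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Give the default mount table a cluster-specific name...
    conf.set(Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY, "clusterA");
    // ...so the short addLink overload now registers links under "clusterA"
    // instead of the hard-coded default table name.
    ConfigUtil.addLink(conf, "/data", URI.create("hdfs://nn1:8020/data"));
    System.out.println(
        conf.get(Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY));
  }
}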

[hadoop] branch branch-3.2 updated: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable (#2100)

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 704b273  HDFS-15436. Default mount table name used by ViewFileSystem 
should be configurable (#2100)
704b273 is described below

commit 704b273ec86d9387bfbc7b316a4880a7561198a7
Author: Virajith Jalaparti 
AuthorDate: Fri Jun 26 13:19:16 2020 -0700

HDFS-15436. Default mount table name used by ViewFileSystem should be 
configurable (#2100)

* HDFS-15436. Default mount table name used by ViewFileSystem should be 
configurable

* Replace Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE use in tests

* Address Uma's comments on PR#2100

* Sort lists in test to match without concern to order

* Address comments, fix checkstyle and fix failing tests

* Fix checkstyle

(cherry picked from commit bed0a3a37404e9defda13a5bffe5609e72466e46)
---
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java| 33 +++
 .../org/apache/hadoop/fs/viewfs/Constants.java | 10 +++-
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |  2 +-
 .../fs/viewfs/TestViewFsWithAuthorityLocalFs.java  |  5 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java| 10 +++-
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |  2 +-
 .../src/site/markdown/ViewFsOverloadScheme.md  | 33 +++
 .../hadoop/fs/viewfs/TestViewFileSystemHdfs.java   |  6 +-
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 68 --
 9 files changed, 141 insertions(+), 28 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
index 6dd1f65..7d29b8f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
@@ -66,8 +66,7 @@ public class ConfigUtil {
*/
   public static void addLink(final Configuration conf, final String src,
   final URI target) {
-addLink( conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, 
-src, target);   
+addLink(conf, getDefaultMountTableName(conf), src, target);
   }
 
   /**
@@ -88,8 +87,7 @@ public class ConfigUtil {
* @param target
*/
   public static void addLinkMergeSlash(Configuration conf, final URI target) {
-addLinkMergeSlash(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE,
-target);
+addLinkMergeSlash(conf, getDefaultMountTableName(conf), target);
   }
 
   /**
@@ -110,8 +108,7 @@ public class ConfigUtil {
* @param target
*/
   public static void addLinkFallback(Configuration conf, final URI target) {
-addLinkFallback(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE,
-target);
+addLinkFallback(conf, getDefaultMountTableName(conf), target);
   }
 
   /**
@@ -132,7 +129,7 @@ public class ConfigUtil {
* @param targets
*/
   public static void addLinkMerge(Configuration conf, final URI[] targets) {
-addLinkMerge(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, targets);
+addLinkMerge(conf, getDefaultMountTableName(conf), targets);
   }
 
   /**
@@ -166,8 +163,7 @@ public class ConfigUtil {
 
   public static void addLinkNfly(final Configuration conf, final String src,
   final URI ... targets) {
-addLinkNfly(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, src, null,
-targets);
+addLinkNfly(conf, getDefaultMountTableName(conf), src, null, targets);
   }
 
   /**
@@ -177,8 +173,7 @@ public class ConfigUtil {
*/
   public static void setHomeDirConf(final Configuration conf,
   final String homedir) {
-setHomeDirConf(  conf,
-Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE,   homedir);
+setHomeDirConf(conf, getDefaultMountTableName(conf), homedir);
   }
   
   /**
@@ -202,7 +197,7 @@ public class ConfigUtil {
* @return home dir value, null if variable is not in conf
*/
   public static String getHomeDirValue(final Configuration conf) {
-return getHomeDirValue(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE);
+return getHomeDirValue(conf, getDefaultMountTableName(conf));
   }
   
   /**
@@ -216,4 +211,18 @@ public class ConfigUtil {
 return conf.get(getConfigViewFsPrefix(mountTableName) + "." +
 Constants.CONFIG_VIEWFS_HOMEDIR);
   }
+
+  /**
+   * Get the name of the default mount table to use. If
+   * {@link Constants#CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY} is specified,
+   * its value is returned. Otherwise,
+   * {@link Constants#CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE} is returned.
+   *
+   * @param conf Configuration to use.
+   * @return the name of the default mount table to use.
+   */
+  public sta

[hadoop] branch branch-3.3 updated: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable (#2100)

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new ea97fe2  HDFS-15436. Default mount table name used by ViewFileSystem 
should be configurable (#2100)
ea97fe2 is described below

commit ea97fe250c7df1bfe7590b823b2b76cae8454e6b
Author: Virajith Jalaparti 
AuthorDate: Fri Jun 26 13:19:16 2020 -0700

HDFS-15436. Default mount table name used by ViewFileSystem should be 
configurable (#2100)

* HDFS-15436. Default mount table name used by ViewFileSystem should be 
configurable

* Replace Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE use in tests

* Address Uma's comments on PR#2100

* Sort lists in test to match without concern to order

* Address comments, fix checkstyle and fix failing tests

* Fix checkstyle

(cherry picked from commit bed0a3a37404e9defda13a5bffe5609e72466e46)
---
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java| 33 +++
 .../org/apache/hadoop/fs/viewfs/Constants.java | 10 +++-
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |  2 +-
 .../fs/viewfs/TestViewFsWithAuthorityLocalFs.java  |  5 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java| 10 +++-
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |  2 +-
 .../src/site/markdown/ViewFsOverloadScheme.md  | 33 +++
 .../hadoop/fs/viewfs/TestViewFileSystemHdfs.java   |  6 +-
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 68 --
 9 files changed, 141 insertions(+), 28 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
index 6dd1f65..7d29b8f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
@@ -66,8 +66,7 @@ public class ConfigUtil {
*/
   public static void addLink(final Configuration conf, final String src,
   final URI target) {
-addLink( conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, 
-src, target);   
+addLink(conf, getDefaultMountTableName(conf), src, target);
   }
 
   /**
@@ -88,8 +87,7 @@ public class ConfigUtil {
* @param target
*/
   public static void addLinkMergeSlash(Configuration conf, final URI target) {
-addLinkMergeSlash(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE,
-target);
+addLinkMergeSlash(conf, getDefaultMountTableName(conf), target);
   }
 
   /**
@@ -110,8 +108,7 @@ public class ConfigUtil {
* @param target
*/
   public static void addLinkFallback(Configuration conf, final URI target) {
-addLinkFallback(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE,
-target);
+addLinkFallback(conf, getDefaultMountTableName(conf), target);
   }
 
   /**
@@ -132,7 +129,7 @@ public class ConfigUtil {
* @param targets
*/
   public static void addLinkMerge(Configuration conf, final URI[] targets) {
-addLinkMerge(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, targets);
+addLinkMerge(conf, getDefaultMountTableName(conf), targets);
   }
 
   /**
@@ -166,8 +163,7 @@ public class ConfigUtil {
 
   public static void addLinkNfly(final Configuration conf, final String src,
   final URI ... targets) {
-addLinkNfly(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, src, null,
-targets);
+addLinkNfly(conf, getDefaultMountTableName(conf), src, null, targets);
   }
 
   /**
@@ -177,8 +173,7 @@ public class ConfigUtil {
*/
   public static void setHomeDirConf(final Configuration conf,
   final String homedir) {
-setHomeDirConf(  conf,
-Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE,   homedir);
+setHomeDirConf(conf, getDefaultMountTableName(conf), homedir);
   }
   
   /**
@@ -202,7 +197,7 @@ public class ConfigUtil {
* @return home dir value, null if variable is not in conf
*/
   public static String getHomeDirValue(final Configuration conf) {
-return getHomeDirValue(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE);
+return getHomeDirValue(conf, getDefaultMountTableName(conf));
   }
   
   /**
@@ -216,4 +211,18 @@ public class ConfigUtil {
 return conf.get(getConfigViewFsPrefix(mountTableName) + "." +
 Constants.CONFIG_VIEWFS_HOMEDIR);
   }
+
+  /**
+   * Get the name of the default mount table to use. If
+   * {@link Constants#CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE_NAME_KEY} is specified,
+   * its value is returned. Otherwise,
+   * {@link Constants#CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE} is returned.
+   *
+   * @param conf Configuration to use.
+   * @return the name of the default mount table to use.
+   */
+  public sta

[hadoop] branch branch-3.1 updated: HDFS-15429. mkdirs should work when parent dir is an internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 30d1d29  HDFS-15429. mkdirs should work when parent dir is an 
internalDir and fallback configured. Contributed by Uma Maheswara Rao G.
30d1d29 is described below

commit 30d1d2907643871ccaa3b0906e7af1d3f4f1e6fb
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 26 01:29:38 2020 -0700

HDFS-15429. mkdirs should work when parent dir is an internalDir and 
fallback configured. Contributed by Uma Maheswara Rao G.

(cherry picked from commit d5e1bb6155496cf9d82e121dd1b65d0072312197)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  25 ++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  28 +-
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 229 +---
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 297 +
 4 files changed, 542 insertions(+), 37 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 16a5e08..c960a21 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1300,6 +1300,31 @@ public class ViewFileSystem extends FileSystem {
   dir.toString().substring(1))) {
 return true; // this is the stupid semantics of FileSystem
   }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leafChild = (InodeTree.SlashPath.equals(dir)) ?
+InodeTree.SlashPath.toString() :
+dir.getName();
+Path dirToCreate = new Path(parent, leafChild);
+
+try {
+  return linkedFallbackFs.mkdirs(dirToCreate, permission);
+} catch (IOException e) {
+  if (LOG.isDebugEnabled()) {
+StringBuilder msg =
+new StringBuilder("Failed to create ").append(dirToCreate)
+.append(" at fallback : ")
+.append(linkedFallbackFs.getUri());
+LOG.debug(msg.toString(), e);
+  }
+  return false;
+}
+  }
+
   throw readOnlyMountTable("mkdirs",  dir);
 }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index b10c897..770f43b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -1127,11 +1127,35 @@ public class ViewFs extends AbstractFileSystem {
 
 @Override
 public void mkdir(final Path dir, final FsPermission permission,
-final boolean createParent) throws AccessControlException,
-FileAlreadyExistsException {
+final boolean createParent) throws IOException {
   if (theInternalDir.isRoot() && dir == null) {
 throw new FileAlreadyExistsException("/ already exits");
   }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+AbstractFileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leafChild = (InodeTree.SlashPath.equals(dir)) ?
+InodeTree.SlashPath.toString() :
+dir.getName();
+Path dirToCreate = new Path(parent, leafChild);
+try {
+  // We are here because the parent dir already exists in the mount
+  // table internal tree. So, let's always create the parent in fallback.
+  linkedFallbackFs.mkdir(dirToCreate, permission, true);
+  return;
+} catch (IOException e) {
+  if (LOG.isDebugEnabled()) {
+StringBuilder msg = new StringBuilder("Failed to create {}")
+.append(" at fallback fs : {}");
+LOG.debug(msg.toString(), dirToCreate, linkedFallbackFs.getUri());
+  }
+  throw e;
+}
+  }
+
   throw readOnlyMountTable("mkdir", dir);
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/t
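For context, a minimal sketch of the client-side setup that exercises this
change, assuming a root fallback link is configured; the hdfs://nn1 URIs and
the /data paths are illustrative only:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsConstants;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.viewfs.ConfigUtil;

    public class FallbackMkdirsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "/data/a" is a mount link, so "/data" is an internal mount-tree dir.
        ConfigUtil.addLink(conf, "/data/a", new URI("hdfs://nn1/data/a"));
        ConfigUtil.addLinkFallback(conf, new URI("hdfs://nn1/"));

        FileSystem viewFs = FileSystem.get(FsConstants.VIEWFS_URI, conf);
        // The parent "/data" is an internal dir. With the fallback configured,
        // the new directory is created on the fallback fs instead of the call
        // failing with a read-only mount table error.
        viewFs.mkdirs(new Path("/data/newchild"));
      }
    }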

[hadoop] branch branch-3.1 updated: HADOOP-17060. Clarify listStatus and getFileStatus behaviors inconsistent in the case of ViewFs implementation for isDirectory. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 2087013  HADOOP-17060. Clarify listStatus and getFileStatus behaviors 
inconsistent in the case of ViewFs implementation for isDirectory. Contributed 
by Uma Maheswara Rao G.
2087013 is described below

commit 208701329c47019d1f9db8222121f83436b0affd
Author: Uma Maheswara Rao G 
AuthorDate: Wed Jun 10 15:00:02 2020 -0700

HADOOP-17060. Clarify listStatus and getFileStatus behaviors inconsistent 
in the case of ViewFs implementation for isDirectory. Contributed by Uma 
Maheswara Rao G.

(cherry picked from commit 93b121a9717bb4ef5240fda877ebb5275f6446b4)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 36 --
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 24 +++
 .../src/main/java/org/apache/hadoop/fs/Hdfs.java   | 22 +
 .../apache/hadoop/hdfs/DistributedFileSystem.java  | 25 ---
 4 files changed, 94 insertions(+), 13 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 7552c06..e2d8eac 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -486,6 +486,14 @@ public class ViewFileSystem extends FileSystem {
 : new ViewFsFileStatus(orig, qualified);
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * If the given path is a symlink(mount link), the path will be resolved to a
+   * target path and it will get the resolved path's FileStatus object. It will
+   * not be represented as a symlink and isDirectory API returns true if the
+   * resolved path is a directory, false otherwise.
+   */
   @Override
   public FileStatus getFileStatus(final Path f) throws AccessControlException,
   FileNotFoundException, IOException {
@@ -503,6 +511,25 @@ public class ViewFileSystem extends FileSystem {
 res.targetFileSystem.access(res.remainingPath, mode);
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * Note: listStatus on root("/") considers listing from fallbackLink if
+   * available. If the same directory name is present in configured mount path
+   * as well as in fallback link, then only the configured mount path will be
+   * listed in the returned result.
+   *
+   * If any of the immediate children of the given path f is a 
symlink(mount
+   * link), the returned FileStatus object of that children would be 
represented
+   * as a symlink. It will not be resolved to the target path and will not get
+   * the target path FileStatus object. The target path will be available via
+   * getSymlink on that children's FileStatus object. Since it represents as
+   * symlink, isDirectory on that children's FileStatus will return false.
+   *
+   * If you want to get the FileStatus of target path for that children, you 
may
+   * want to use GetFileStatus API with that children's symlink path. Please 
see
+   * {@link ViewFileSystem#getFileStatus(Path f)}
+   */
   @Override
   public FileStatus[] listStatus(final Path f) throws AccessControlException,
   FileNotFoundException, IOException {
@@ -1135,20 +1162,11 @@ public class ViewFileSystem extends FileSystem {
   checkPathIsSlash(f);
   return new FileStatus(0, true, 0, 0, creationTime, creationTime,
   PERMISSION_555, ugi.getShortUserName(), ugi.getPrimaryGroupName(),
-
   new Path(theInternalDir.fullPath).makeQualified(
   myUri, ROOT_PATH));
 }
 
 
-/**
- * {@inheritDoc}
- *
- * Note: listStatus on root("/") considers listing from fallbackLink if
- * available. If the same directory name is present in configured mount
- * path as well as in fallback link, then only the configured mount path
- * will be listed in the returned result.
- */
 @Override
 public FileStatus[] listStatus(Path f) throws AccessControlException,
 FileNotFoundException, IOException {
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 8cebc76..5d06b30 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -351,6 +351,14 @@ public class ViewFs extends AbstractFileSystem {
 return res.targetFileSystem.getFileChecksum(res.remainingPath);
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * If the giv
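A short sketch of the behaviour described by this Javadoc, assuming viewFs is
an already-initialized ViewFileSystem and /data is a configured mount link
(both names are illustrative):

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    static void showMountLinkBehaviour(FileSystem viewFs) throws Exception {
      // listStatus: each mount-link child is reported as a symlink, so
      // isSymlink() is true, isDirectory() is false, and getSymlink() holds
      // the target path.
      for (FileStatus st : viewFs.listStatus(new Path("/"))) {
        System.out.println(st.getPath() + " symlink=" + st.isSymlink());
      }
      // getFileStatus: the mount link itself is resolved, so the returned
      // status describes the target and isDirectory() reflects the target.
      FileStatus resolved = viewFs.getFileStatus(new Path("/data"));
      System.out.println(resolved.isDirectory());
    }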

[hadoop] branch branch-3.1 updated: HDFS-15418. ViewFileSystemOverloadScheme should represent mount links as non symlinks. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new e17770d  HDFS-15418. ViewFileSystemOverloadScheme should represent 
mount links as non symlinks. Contributed by Uma Maheswara Rao G.
e17770d is described below

commit e17770dec60a560c26d873ae3cfcb3b2f943930e
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jun 20 00:32:02 2020 -0700

HDFS-15418. ViewFileSystemOverloadScheme should represent mount links as 
non symlinks. Contributed by Uma Maheswara Rao G.

(cherry picked from commit b27810aa6015253866ccc0ccc7247ad7024c0730)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java |   8 ++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  71 +++
 .../fs/viewfs/ViewFileSystemOverloadScheme.java|  20 +++-
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  80 -
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 132 +
 .../hadoop/fs/viewfs/TestViewfsFileStatus.java |   4 +-
 .../src/site/markdown/ViewFsOverloadScheme.md  |  42 +++
 ...SystemOverloadSchemeHdfsFileSystemContract.java |   5 +
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |   9 ++
 9 files changed, 295 insertions(+), 76 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 0a5d4b4..f454f63 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -90,4 +90,12 @@ public interface Constants {
   String CONFIG_VIEWFS_ENABLE_INNER_CACHE = "fs.viewfs.enable.inner.cache";
 
   boolean CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT = true;
+
+  /**
+   * Enable ViewFileSystem to show mountlinks as symlinks.
+   */
+  String CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS =
+  "fs.viewfs.mount.links.as.symlinks";
+
+  boolean CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT = true;
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index e2d8eac..a1fd14b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.fs.viewfs;
 
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.PERMISSION_555;
 
 import java.io.FileNotFoundException;
@@ -525,10 +527,18 @@ public class ViewFileSystem extends FileSystem {
* the target path FileStatus object. The target path will be available via
* getSymlink on that children's FileStatus object. Since it represents as
* symlink, isDirectory on that children's FileStatus will return false.
+   * This behavior can be changed by setting an advanced configuration
+   * fs.viewfs.mount.links.as.symlinks to false. In this case, mount points 
will
+   * be represented as non-symlinks and all the file/directory attributes like
+   * permissions, isDirectory etc. will be assigned from its resolved target
+   * directory/file.
*
* If you want to get the FileStatus of target path for that children, you 
may
* want to use GetFileStatus API with that children's symlink path. Please 
see
* {@link ViewFileSystem#getFileStatus(Path f)}
+   *
+   * Note: In ViewFileSystem, by default the mount links are represented as
+   * symlinks.
*/
   @Override
   public FileStatus[] listStatus(final Path f) throws AccessControlException,
@@ -1075,6 +1085,7 @@ public class ViewFileSystem extends FileSystem {
    final long creationTime; // of the mount table
 final UserGroupInformation ugi; // the user/group of user who created 
mtable
 final URI myUri;
+private final boolean showMountLinksAsSymlinks;
 
 public InternalDirOfViewFs(final InodeTree.INodeDir dir,
 final long cTime, final UserGroupInformation ugi, URI uri,
@@ -1088,6 +1099,9 @@ public class ViewFileSystem extends FileSystem {
   theInternalDir = dir;
   creationTime = cTime;
   this.ugi = ugi;
+  showMountLinksAsSymlinks = config
+  .getBoolean(CONFIG_VIEWFS_MOU
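A minimal sketch of flipping the new switch from client code; the key and its
default come from the patch above, the helper method name is illustrative:

    import org.apache.hadoop.conf.Configuration;

    static Configuration symlinkFreeViewFsConf() {
      Configuration conf = new Configuration();
      // Default is true: mount links appear as symlinks in listStatus results.
      // Setting it to false makes mount points show up as plain files or
      // directories carrying the attributes of their resolved targets.
      conf.setBoolean("fs.viewfs.mount.links.as.symlinks", false);
      return conf;
    }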

[hadoop] branch branch-3.1 updated: HDFS-15427. Merged ListStatus with Fallback target filesystem and InternalDirViewFS. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 81e6261  HDFS-15427. Merged ListStatus with Fallback target filesystem 
and InternalDirViewFS. Contributed by Uma Maheswara Rao G.
81e6261 is described below

commit 81e62613fbf7d6185a69f3ff26dd0121b9318d68
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jun 23 01:42:25 2020 -0700

HDFS-15427. Merged ListStatus with Fallback target filesystem and 
InternalDirViewFS. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 7c02d1889bbeabc73c95a4c83f0cd204365ff410)
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |   4 +-
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  89 
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  94 +---
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 251 -
 4 files changed, 360 insertions(+), 78 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 50c839b..d1e5d3a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -374,7 +374,7 @@ abstract class InodeTree {
   throws UnsupportedFileSystemException, URISyntaxException, IOException;
 
   protected abstract T getTargetFileSystem(INodeDir dir)
-  throws URISyntaxException;
+  throws URISyntaxException, IOException;
 
   protected abstract T getTargetFileSystem(String settings, URI[] mergeFsURIs)
   throws UnsupportedFileSystemException, URISyntaxException, IOException;
@@ -393,7 +393,7 @@ abstract class InodeTree {
 return rootFallbackLink != null;
   }
 
-  private INodeLink getRootFallbackLink() {
+  protected INodeLink getRootFallbackLink() {
 Preconditions.checkState(root.isInternalDir());
 return rootFallbackLink;
   }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index a1fd14b..16a5e08 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -288,8 +288,9 @@ public class ViewFileSystem extends FileSystem {
 
 @Override
 protected FileSystem getTargetFileSystem(final INodeDir 
dir)
-  throws URISyntaxException {
-  return new InternalDirOfViewFs(dir, creationTime, ugi, myUri, 
config);
+throws URISyntaxException {
+  return new InternalDirOfViewFs(dir, creationTime, ugi, myUri, config,
+  this);
 }
 
 @Override
@@ -516,10 +517,10 @@ public class ViewFileSystem extends FileSystem {
   /**
* {@inheritDoc}
*
-   * Note: listStatus on root("/") considers listing from fallbackLink if
-   * available. If the same directory name is present in configured mount path
-   * as well as in fallback link, then only the configured mount path will be
-   * listed in the returned result.
+   * Note: listStatus considers listing from fallbackLink if available. If the
+   * same directory path is present in configured mount path as well as in
+   * fallback fs, then only the fallback path will be listed in the returned
+   * result except for link.
*
   * If any of the immediate children of the given path f is a 
symlink(mount
* link), the returned FileStatus object of that children would be 
represented
@@ -1086,11 +1087,13 @@ public class ViewFileSystem extends FileSystem {
 final UserGroupInformation ugi; // the user/group of user who created 
mtable
 final URI myUri;
 private final boolean showMountLinksAsSymlinks;
+private InodeTree fsState;
 
 public InternalDirOfViewFs(final InodeTree.INodeDir dir,
 final long cTime, final UserGroupInformation ugi, URI uri,
-Configuration config) throws URISyntaxException {
+Configuration config, InodeTree fsState) throws URISyntaxException {
   myUri = uri;
+  this.fsState = fsState;
   try {
 initialize(myUri, config);
   } catch (IOException e) {
@@ -1186,7 +1189,8 @@ public class ViewFileSystem extends FileSystem {
 FileNotFoundException, IOException {
   checkPathIsSlash(f);
   FileStatus[] fallbackStatuses = listStatusForFallbackLink();
-  FileStatus[] result = new 
FileStatus[theInternalDir.getChildren().size()];
+  Set linkStatuses = new HashSet<>();
+  Set internalDirStatus

[hadoop] branch branch-3.1 updated: HADOOP-17029. Return correct permission and owner for listing on internal directories in ViewFs. Contributed by Abhishek Das.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 0e9b6b0  HADOOP-17029. Return correct permission and owner for listing 
on internal directories in ViewFs. Contributed by Abhishek Das.
0e9b6b0 is described below

commit 0e9b6b0124cf0c2b314e3ede301bb14ccf539cc1
Author: Abhishek Das 
AuthorDate: Fri Jun 5 14:56:51 2020 -0700

HADOOP-17029. Return correct permission and owner for listing on internal 
directories in ViewFs. Contributed by Abhishek Das.

(cherry picked from commit e7dd02768b658b2a1f216fbedc65938d9b6ca6e9)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  27 +++--
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  41 +--
 .../hadoop/fs/viewfs/TestViewfsFileStatus.java | 118 -
 3 files changed, 146 insertions(+), 40 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index bdf429e..a19366e 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1161,13 +1161,26 @@ public class ViewFileSystem extends FileSystem {
 INode inode = iEntry.getValue();
 if (inode.isLink()) {
   INodeLink link = (INodeLink) inode;
-
-  result[i++] = new FileStatus(0, false, 0, 0,
-creationTime, creationTime, PERMISSION_555,
-ugi.getShortUserName(), ugi.getPrimaryGroupName(),
-link.getTargetLink(),
-new Path(inode.fullPath).makeQualified(
-myUri, null));
+  try {
+String linkedPath = link.getTargetFileSystem().getUri().getPath();
+FileStatus status =
+((ChRootedFileSystem)link.getTargetFileSystem())
+.getMyFs().getFileStatus(new Path(linkedPath));
+result[i++] = new FileStatus(status.getLen(), false,
+  status.getReplication(), status.getBlockSize(),
+  status.getModificationTime(), status.getAccessTime(),
+  status.getPermission(), status.getOwner(), status.getGroup(),
+  link.getTargetLink(),
+  new Path(inode.fullPath).makeQualified(
+  myUri, null));
+  } catch (FileNotFoundException ex) {
+result[i++] = new FileStatus(0, false, 0, 0,
+  creationTime, creationTime, PERMISSION_555,
+  ugi.getShortUserName(), ugi.getPrimaryGroupName(),
+  link.getTargetLink(),
+  new Path(inode.fullPath).makeQualified(
+  myUri, null));
+  }
 } else {
   result[i++] = new FileStatus(0, true, 0, 0,
 creationTime, creationTime, PERMISSION_555,
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index dde6649..8cebc76 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -910,11 +910,25 @@ public class ViewFs extends AbstractFileSystem {
   if (inode.isLink()) {
 INodeLink inodelink = 
   (INodeLink) inode;
-result = new FileStatus(0, false, 0, 0, creationTime, creationTime,
+try {
+  String linkedPath = inodelink.getTargetFileSystem()
+  .getUri().getPath();
+  FileStatus status = ((ChRootedFs)inodelink.getTargetFileSystem())
+  .getMyFs().getFileStatus(new Path(linkedPath));
+  result = new FileStatus(status.getLen(), false,
+status.getReplication(), status.getBlockSize(),
+status.getModificationTime(), status.getAccessTime(),
+status.getPermission(), status.getOwner(), status.getGroup(),
+inodelink.getTargetLink(),
+new Path(inode.fullPath).makeQualified(
+myUri, null));
+} catch (FileNotFoundException ex) {
+  result = new FileStatus(0, false, 0, 0, creationTime, creationTime,
 PERMISSION_555, ugi.getShortUserName(), ugi.getPrimaryGroupName(),
 inodelink.getTargetLink(),
 new Path(inode.fullPath).makeQualified(
 myUri, null));
+}
   } else {
 result = new FileStatus(0, true, 0, 0, creationTime, creationTime,
   PERMISSION_555, ugi.getShortUserName(), ugi.getPrimaryGroupName(),
@@ -969,12 +983,25 @@ public class ViewFs extends

[hadoop] branch branch-3.1 updated: HDFS-15396. Fix TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir. Contributed by Ayush Saxena.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new b720d77  HDFS-15396. Fix 
TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir. 
Contributed by Ayush Saxena.
b720d77 is described below

commit b720d7770d93e67afe05beb3943913da241696d2
Author: Ayush Saxena 
AuthorDate: Mon Jun 8 01:59:10 2020 +0530

HDFS-15396. Fix 
TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir. 
Contributed by Ayush Saxena.

(cherry picked from commit a8610c15c498531bf3c011f1b0ace8ef61f2)
---
 .../src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java  | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index a19366e..7552c06 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1163,6 +1163,9 @@ public class ViewFileSystem extends FileSystem {
   INodeLink link = (INodeLink) inode;
   try {
 String linkedPath = link.getTargetFileSystem().getUri().getPath();
+if("".equals(linkedPath)) {
+  linkedPath = "/";
+}
 FileStatus status =
 ((ChRootedFileSystem)link.getTargetFileSystem())
 .getMyFs().getFileStatus(new Path(linkedPath));


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.1 updated: HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 100c139  HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme 
in processPath. Contributed by Uma Maheswara Rao G.
100c139 is described below

commit 100c13967ea713b473e0742e3e40c17e8de62147
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 12 14:32:19 2020 -0700

HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 785b1def959fab6b8b766410bcd240feee13)
(cherry picked from commit 120ee793fc4bcbf9d1945d5e38e3ad5b2b290a0e)
---
 .../java/org/apache/hadoop/fs/shell/FsUsage.java   |   3 +-
 .../hadoop/fs/viewfs/ViewFileSystemUtil.java   |  14 +-
 ...ViewFileSystemOverloadSchemeWithFSCommands.java | 173 +
 3 files changed, 188 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
index 6596527..64aade3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
@@ -128,7 +128,8 @@ class FsUsage extends FsCommand {
 
 @Override
 protected void processPath(PathData item) throws IOException {
-  if (ViewFileSystemUtil.isViewFileSystem(item.fs)) {
+  if (ViewFileSystemUtil.isViewFileSystem(item.fs)
+  || ViewFileSystemUtil.isViewFileSystemOverloadScheme(item.fs)) {
 ViewFileSystem viewFileSystem = (ViewFileSystem) item.fs;
 Map  fsStatusMap =
 ViewFileSystemUtil.getStatus(viewFileSystem, item.path);
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
index c8a1d78..f486a10 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
@@ -52,6 +52,17 @@ public final class ViewFileSystemUtil {
   }
 
   /**
+   * Check if the FileSystem is a ViewFileSystemOverloadScheme.
+   *
+   * @param fileSystem
+   * @return true if the fileSystem is ViewFileSystemOverloadScheme
+   */
+  public static boolean isViewFileSystemOverloadScheme(
+  final FileSystem fileSystem) {
+return fileSystem instanceof ViewFileSystemOverloadScheme;
+  }
+
+  /**
* Get FsStatus for all ViewFsMountPoints matching path for the given
* ViewFileSystem.
*
@@ -93,7 +104,8 @@ public final class ViewFileSystemUtil {
*/
   public static Map getStatus(
   FileSystem fileSystem, Path path) throws IOException {
-if (!isViewFileSystem(fileSystem)) {
+if (!(isViewFileSystem(fileSystem)
+|| isViewFileSystemOverloadScheme(fileSystem))) {
   throw new UnsupportedFileSystemException("FileSystem '"
   + fileSystem.getUri() + "'is not a ViewFileSystem.");
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
new file mode 100644
index 000..a974377
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
@@ -0,0 +1,173 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.tools;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.net.URI;
+import java.net.UR

[hadoop] branch branch-3.1 updated: HDFS-15330. Document the ViewFSOverloadScheme details in ViewFS guide. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 64dbb39  HDFS-15330. Document the ViewFSOverloadScheme details in 
ViewFS guide. Contributed by Uma Maheswara Rao G.
64dbb39 is described below

commit 64dbb39e7134cc59750502949d16043bcc50288d
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 5 10:58:21 2020 -0700

HDFS-15330. Document the ViewFSOverloadScheme details in ViewFS guide. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 76fa0222f0d2e2d92b4a1eedba8b3e38002e8c23)
(cherry picked from commit 418580446b65be3a0674762e76fc2cb9a1e5629a)
---
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |  40 -
 .../hadoop-hdfs/src/site/markdown/ViewFs.md|   6 +
 .../src/site/markdown/ViewFsOverloadScheme.md  | 163 +
 .../site/resources/images/ViewFSOverloadScheme.png | Bin 0 -> 190004 bytes
 hadoop-project/src/site/site.xml   |   1 +
 5 files changed, 209 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index c8a9184..b545e9b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -670,4 +670,42 @@ Usage: `hdfs debug recoverLease -path  [-retries 
]`
 | [`-path` *path*] | HDFS path for which to recover the lease. |
 | [`-retries` *num-retries*] | Number of times the client will retry calling 
recoverLease. The default number of retries is 1. |
 
-Recover the lease on the specified path. The path must reside on an HDFS 
filesystem. The default number of retries is 1.
+Recover the lease on the specified path. The path must reside on an HDFS file 
system. The default number of retries is 1.
+
+dfsadmin with ViewFsOverloadScheme
+--
+
+Usage: `hdfs dfsadmin -fs  `
+
+| COMMAND\_OPTION | Description |
+|: |: |
+| `-fs` *child fs mount link URI* | It's a logical mount link path to a child 
file system in the ViewFS world. This uri is typically formed as the src mount 
link prefixed with fs.defaultFS. Please note, this is not an actual child file 
system uri; instead it's a logical mount link uri pointing to the actual child 
file system. |
+
+Example command usage:
+   `hdfs dfsadmin -fs hdfs://nn1 -safemode enter`
+
+In ViewFsOverloadScheme, we may have multiple child file systems as mount 
point mappings, as shown in the [ViewFsOverloadScheme 
Guide](./ViewFsOverloadScheme.html). Here the -fs option is an optional generic 
parameter supported by dfsadmin. When users want to execute commands on one of 
the child file systems, they need to pass that file system's mount mapping link 
uri to the -fs option. Let's take an example mount link configuration and 
dfsadmin command below.
+
+Mount link:
+
+```xml
+<property>
+  <name>fs.defaultFS</name>
+  <value>hdfs://MyCluster1</value>
+</property>
+
+<property>
+  <name>fs.viewfs.mounttable.MyCluster1./user</name>
+  <value>hdfs://MyCluster2/user</value>
+  <!-- hdfs://MyCluster2/user
+   mount link path: /user
+   mount link uri: hdfs://MyCluster1/user
+   mount target uri for /user: hdfs://MyCluster2/user -->
+</property>
+```
+
+If a user wants to talk to `hdfs://MyCluster2/`, they can pass the -fs option 
+(`-fs hdfs://MyCluster1/user`).
+Since /user was mapped to the cluster `hdfs://MyCluster2/user`, dfsadmin 
+resolves the passed (`-fs hdfs://MyCluster1/user`) to the target fs 
+(`hdfs://MyCluster2/user`).
+This way users can get access to all hdfs child file systems in 
+ViewFsOverloadScheme.
+If no `-fs` option is provided, it will try to connect to the configured 
+fs.defaultFS cluster, if a cluster is running with the fs.defaultFS uri.
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
index f851ef6..52ad49c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
@@ -361,6 +361,12 @@ resume its work, it's a good idea to provision some sort 
of cron job to purge su
 
 Delegation tokens for the cluster to which you are submitting the job 
(including all mounted volumes for that cluster’s mount table), and for input 
and output paths to your map-reduce job (including all volumes mounted via 
mount tables for the specified input and output paths) are all handled 
automatically. In addition, there is a way to add additional delegation tokens 
to the base cluster configuration for special circumstances.
 
+Don't want to change the scheme, or find it difficult to copy mount-table 
configurations to all clients?
+---
+
+Please refer to the [View File System Overload Scheme 

[hadoop] branch branch-3.1 updated: HDFS-15389. DFSAdmin should close filesystem and dfsadmin -setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by Ayush Saxena

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new f7e590a  HDFS-15389. DFSAdmin should close filesystem and dfsadmin 
-setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by 
Ayush Saxena
f7e590a is described below

commit f7e590aaffd03ed4308a7a90f862e876ea89f641
Author: Ayush Saxena 
AuthorDate: Sat Jun 6 10:49:38 2020 +0530

HDFS-15389. DFSAdmin should close filesystem and dfsadmin 
-setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by 
Ayush Saxena

(cherry picked from commit cc671b16f7b0b7c1ed7b41b96171653dc43cf670)
(cherry picked from commit bee2846bee4ae676bdc14585f8a3927a9dd7df37)
---
 .../java/org/apache/hadoop/hdfs/tools/DFSAdmin.java  | 13 +++--
 ...TestViewFileSystemOverloadSchemeWithDFSAdmin.java | 20 
 2 files changed, 23 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
index ab243f3..163c147 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
@@ -479,9 +479,9 @@ public class DFSAdmin extends FsShell {
   public DFSAdmin(Configuration conf) {
 super(conf);
   }
-  
+
   protected DistributedFileSystem getDFS() throws IOException {
-return AdminHelper.getDFS(getConf());
+return AdminHelper.checkAndGetDFS(getFS(), getConf());
   }
   
   /**
@@ -1036,14 +1036,7 @@ public class DFSAdmin extends FsShell {
   System.err.println("Bandwidth should be a non-negative integer");
   return exitCode;
 }
-
-FileSystem fs = getFS();
-if (!(fs instanceof DistributedFileSystem)) {
-  System.err.println("FileSystem is " + fs.getUri());
-  return exitCode;
-}
-
-DistributedFileSystem dfs = (DistributedFileSystem) fs;
+DistributedFileSystem dfs = getDFS();
 try{
   dfs.setBalancerBandwidth(bandwidth);
   System.out.println("Balancer bandwidth is set to " + bandwidth);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
index 1961dc2..a9475dd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
@@ -263,4 +263,24 @@ public class TestViewFileSystemOverloadSchemeWithDFSAdmin {
 assertOutMsg("Disallowing snapshot on / succeeded", 1);
 assertEquals(0, ret);
   }
+
+  /**
+   * Tests setBalancerBandwidth with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testSetBalancerBandwidth() throws Exception {
+final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+addMountLinks(defaultFSURI.getAuthority(),
+new String[] {HDFS_USER_FOLDER, LOCAL_FOLDER },
+new String[] {hdfsTargetPath.toUri().toString(),
+localTargetDir.toURI().toString() },
+conf);
+final DFSAdmin dfsAdmin = new DFSAdmin(conf);
+redirectStream();
+int ret = ToolRunner.run(dfsAdmin,
+new String[] {"-fs", defaultFSURI.toString(), "-setBalancerBandwidth",
+"1000"});
+assertOutMsg("Balancer bandwidth is set to 1000", 0);
+assertEquals(0, ret);
+  }
 }
\ No newline at end of file


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.1 updated: HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 2e1dfc1  HDFS-15321. Make DFSAdmin tool to work with 
ViewFileSystemOverloadScheme. Contributed by Uma Maheswara Rao G.
2e1dfc1 is described below

commit 2e1dfc152b36d628c17c66de11741e8828312bd4
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jun 2 11:09:26 2020 -0700

HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit ed83c865dd0b4e92f3f89f79543acc23792bb69c)
(cherry picked from commit 0b5e202614f0bc20a0db6656f924fa4d2741d00c)
---
 .../fs/viewfs/ViewFileSystemOverloadScheme.java|  29 +++
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |   2 +-
 .../org/apache/hadoop/hdfs/tools/AdminHelper.java  |  25 +-
 .../org/apache/hadoop/hdfs/tools/DFSAdmin.java |  13 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 266 +
 5 files changed, 317 insertions(+), 18 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index f5952d5..36f9cd1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.fs.viewfs;
 
+import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
@@ -27,6 +28,7 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 
 /**
@@ -227,4 +229,31 @@ public class ViewFileSystemOverloadScheme extends 
ViewFileSystem {
 
   }
 
+  /**
+   * This is an admin-only API to give access to its child raw file system, if
+   * the path is a link. If the given path is an internal directory (path is from
+   * the mount paths tree), it will initialize the file system of the given path
+   * uri directly. If the path cannot be resolved to any internal directory or
+   * link, it will throw NotInMountpointException. Please note, this API will not
+   * return a chrooted file system. Instead, it will get the actual raw file
+   * system instances.
+   *
+   * @param path - fs uri path
+   * @param conf - configuration
+   * @throws IOException
+   */
+  public FileSystem getRawFileSystem(Path path, Configuration conf)
+  throws IOException {
+InodeTree.ResolveResult res;
+try {
+  res = fsState.resolve(getUriPath(path), true);
+  return res.isInternalDir() ? fsGetter().get(path.toUri(), conf)
+  : ((ChRootedFileSystem) res.targetFileSystem).getMyFs();
+} catch (FileNotFoundException e) {
+  // No link configured with passed path.
+  throw new NotInMountpointException(path,
+  "No link found for the given path.");
+}
+  }
+
 }
\ No newline at end of file
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
index f051c9c..efced73 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
@@ -192,7 +192,7 @@ public class ViewFsTestSetup {
* Adds the given mount links to the configuration. Mount link mappings are
* in sources, targets at their respective index locations.
*/
-  static void addMountLinksToConf(String mountTable, String[] sources,
+  public static void addMountLinksToConf(String mountTable, String[] sources,
   String[] targets, Configuration config) throws URISyntaxException {
 for (int i = 0; i < sources.length; i++) {
   String src = sources[i];
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
index 9cb646b..27cdf70 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
@@ -1,4 +1,5 @@
 /**
+
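A hedged usage sketch of the new admin-only API; the /user mount path is
illustrative, and fs is assumed to be an already-initialized
ViewFileSystemOverloadScheme instance:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme;

    static void printChildFs(ViewFileSystemOverloadScheme fs, Configuration conf)
        throws Exception {
      // For a path under a mount link this returns the raw (non-chrooted)
      // child file system; a path that resolves to nothing throws
      // NotInMountpointException.
      FileSystem child = fs.getRawFileSystem(new Path("/user"), conf);
      System.out.println("Resolved child fs: " + child.getUri());
    }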

[hadoop] branch branch-3.1 updated: HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's scheme and target uris schemes are same. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 6ae9296  HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's 
scheme and target uris schemes are same. Contributed by Uma Maheswara Rao G.
6ae9296 is described below

commit 6ae92962d9d03d09b67a373831c2d3b53948c497
Author: Uma Maheswara Rao G 
AuthorDate: Thu May 21 21:34:58 2020 -0700

HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's scheme and 
target uris schemes are same. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 4734c77b4b64b7c6432da4cc32881aba85f94ea1)
(cherry picked from commit 8e71e85af70c17f2350f794f8bc2475eb1e3acea)
---
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java|  15 ++-
 .../java/org/apache/hadoop/fs/viewfs/FsGetter.java |  47 
 .../fs/viewfs/HCFSMountTableConfigLoader.java  |   3 +-
 .../org/apache/hadoop/fs/viewfs/NflyFSystem.java   |  29 -
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  24 +---
 .../hadoop/fs/viewfs/ViewFileSystemBaseTest.java   |   1 -
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |  28 -
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 121 +
 8 files changed, 230 insertions(+), 38 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
index 4c3dae9..6dd1f65 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
@@ -136,6 +136,17 @@ public class ConfigUtil {
   }
 
   /**
+   * Add nfly link to configuration for the given mount table.
+   */
+  public static void addLinkNfly(Configuration conf, String mountTableName,
+  String src, String settings, final String targets) {
+conf.set(
+getConfigViewFsPrefix(mountTableName) + "."
++ Constants.CONFIG_VIEWFS_LINK_NFLY + "." + settings + "." + src,
+targets);
+  }
+
+  /**
*
* @param conf
* @param mountTableName
@@ -149,9 +160,7 @@ public class ConfigUtil {
 settings = settings == null
 ? "minReplication=2,repairOnRead=true"
 : settings;
-
-conf.set(getConfigViewFsPrefix(mountTableName) + "." +
-Constants.CONFIG_VIEWFS_LINK_NFLY + "." + settings + "." + src,
+addLinkNfly(conf, mountTableName, src, settings,
 StringUtils.uriToString(targets));
   }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
new file mode 100644
index 000..071af11
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+/**
+ * File system instance getter.
+ */
+@Private
+class FsGetter {
+
+  /**
+   * Gets new file system instance of given uri.
+   */
+  public FileSystem getNewInstance(URI uri, Configuration conf)
+  throws IOException {
+return FileSystem.newInstance(uri, conf);
+  }
+
+  /**
+   * Gets file system instance of given uri.
+   */
+  public FileSystem get(URI uri, Configuration conf) throws IOException {
+return FileSystem.get(uri, conf);
+  }
+}
\ No newline at end of file
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs
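
A quick sketch of how the ConfigUtil.addLinkNfly overload added by HDFS-15322 above can be used. This is illustrative only: the mount table name, source path and target URIs are made up, and the targets string is assumed to be the comma-separated form that StringUtils.uriToString produces for the URI[] overload.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.viewfs.ConfigUtil;

    public class NflyLinkSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Settings string mirrors the default shown in the diff above;
        // table name, source path and targets are illustrative.
        ConfigUtil.addLinkNfly(conf, "ClusterX", "/data",
            "minReplication=2,repairOnRead=true",
            "hdfs://nn1/data,hdfs://nn2/data");
        // The call sets a key of the form
        // fs.viewfs.mounttable.ClusterX.<nfly link infix>.<settings>./data
        // whose value is the comma-separated target list.
      }
    }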

[hadoop] branch branch-3.1 updated: HADOOP-17024. ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root). Contributed by Abhishek Das.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 7015589  HADOOP-17024. ListStatus on ViewFS root (ls "/") should list 
the linkFallBack root (configured target root). Contributed by Abhishek Das.
7015589 is described below

commit 7015589f5845d2732a8b7ba80af9d40187ad5167
Author: Abhishek Das 
AuthorDate: Mon May 18 22:27:12 2020 -0700

HADOOP-17024. ListStatus on ViewFS root (ls "/") should list the 
linkFallBack root (configured target root). Contributed by Abhishek Das.

(cherry picked from commit ce4ec7445345eb94c6741d416814a4eac319f0a6)
(cherry picked from commit 5b248de42d2ae42710531a1514a21d60a1fca4b2)
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 49 ++-
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 51 ++-
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 98 ++
 4 files changed, 209 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 6992343..50c839b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -123,6 +123,7 @@ abstract class InodeTree {
 private final Map> children = new HashMap<>();
 private T internalDirFs =  null; //filesystem of this internal directory
 private boolean isRoot = false;
+private INodeLink fallbackLink = null;
 
 INodeDir(final String pathToNode, final UserGroupInformation aUgi) {
   super(pathToNode, aUgi);
@@ -149,6 +150,17 @@ abstract class InodeTree {
   return isRoot;
 }
 
+INodeLink getFallbackLink() {
+  return fallbackLink;
+}
+
+void addFallbackLink(INodeLink link) throws IOException {
+  if (!isRoot) {
+throw new IOException("Fallback link can only be added for root");
+  }
+  this.fallbackLink = link;
+}
+
 Map> getChildren() {
   return Collections.unmodifiableMap(children);
 }
@@ -580,6 +592,7 @@ abstract class InodeTree {
 }
   }
   rootFallbackLink = fallbackLink;
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
 
 if (!gotMountTableEntry) {
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index a13b6ea..f626ffe 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1161,10 +1161,19 @@ public class ViewFileSystem extends FileSystem {
 }
 
 
+/**
+ * {@inheritDoc}
+ *
+ * Note: listStatus on root("/") considers listing from fallbackLink if
+ * available. If the same directory name is present in configured mount
+ * path as well as in fallback link, then only the configured mount path
+ * will be listed in the returned result.
+ */
 @Override
 public FileStatus[] listStatus(Path f) throws AccessControlException,
 FileNotFoundException, IOException {
   checkPathIsSlash(f);
+  FileStatus[] fallbackStatuses = listStatusForFallbackLink();
   FileStatus[] result = new 
FileStatus[theInternalDir.getChildren().size()];
   int i = 0;
   for (Entry> iEntry :
@@ -1187,7 +1196,45 @@ public class ViewFileSystem extends FileSystem {
 myUri, null));
 }
   }
-  return result;
+  if (fallbackStatuses.length > 0) {
+return consolidateFileStatuses(fallbackStatuses, result);
+  } else {
+return result;
+  }
+}
+
+private FileStatus[] consolidateFileStatuses(FileStatus[] fallbackStatuses,
+FileStatus[] mountPointStatuses) {
+  ArrayList result = new ArrayList<>();
+  Set pathSet = new HashSet<>();
+  for (FileStatus status : mountPointStatuses) {
+result.add(status);
+pathSet.add(status.getPath().getName());
+  }
+  for (FileStatus status : fallbackStatuses) {
+if (!pathSet.contains(status.getPath().getName())) {
+  result.add(status);
+}
+  }
+  return result.toArray(new FileStatus[0]);
+}
+
+private FileStatus[] listStatusForFallbackLink() throws IOException {
+  if (theInternalDir.isRoot() &&
+  theInternalDir.getFallbackLink() != null
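
A small configuration sketch of the behavior HADOOP-17024 describes above: with a root fallback link configured, listStatus on the ViewFS root also lists the fallback target's children, and a name present both as a mount point and under the fallback shows up only once, from the mount table. The mount table name, paths and the raw linkFallback key below are assumptions for illustration, not taken from the patch.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.viewfs.ConfigUtil;

    public class RootFallbackSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // One explicit mount point.
        ConfigUtil.addLink(conf, "ClusterX", "/user",
            URI.create("hdfs://nn1/user"));
        // A root fallback target (key name assumed here).
        conf.set("fs.viewfs.mounttable.ClusterX.linkFallback", "hdfs://nn1/base");
        // With this change, listing viewfs://ClusterX/ shows /user from the
        // mount table plus any other children of hdfs://nn1/base; a /user
        // directory under the fallback would not be listed twice.
      }
    }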

[hadoop] branch branch-3.1 updated: HDFS-15306. Make mount-table to read from central place ( Let's say from HDFS). Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new c4857bb  HDFS-15306. Make mount-table to read from central place ( 
Let's say from HDFS). Contributed by Uma Maheswara Rao G.
c4857bb is described below

commit c4857bb9c1bd5c937cdc658192172d09e56609bc
Author: Uma Maheswara Rao G 
AuthorDate: Thu May 14 17:29:35 2020 -0700

HDFS-15306. Make mount-table to read from central place ( Let's say from 
HDFS). Contributed by Uma Maheswara Rao G.

(cherry picked from commit ac4a2e11d98827c7926a34cda27aa7bcfd3f36c1)
(cherry picked from commit 544996c85702af7ae241ef2f18e2597e2b4050be)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java |   5 +
 .../fs/viewfs/HCFSMountTableConfigLoader.java  | 122 ++
 .../hadoop/fs/viewfs/MountTableConfigLoader.java   |  44 +
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 180 -
 .../org/apache/hadoop/fs/viewfs/package-info.java  |  26 +++
 .../fs/viewfs/TestHCFSMountTableConfigLoader.java  | 165 +++
 ...iewFSOverloadSchemeCentralMountTableConfig.java |  77 +
 ...iewFileSystemOverloadSchemeLocalFileSystem.java |  47 --
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |  71 +++-
 ...FSOverloadSchemeWithMountTableConfigInHDFS.java |  68 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 125 +-
 11 files changed, 797 insertions(+), 133 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 37f1a16..0a5d4b4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -30,6 +30,11 @@ public interface Constants {
* Prefix for the config variable prefix for the ViewFs mount-table
*/
   public static final String CONFIG_VIEWFS_PREFIX = "fs.viewfs.mounttable";
+
+  /**
+   * Prefix for the config variable for the ViewFs mount-table path.
+   */
+  String CONFIG_VIEWFS_MOUNTTABLE_PATH = CONFIG_VIEWFS_PREFIX + ".path";
  
   /**
* Prefix for the home dir for the mount table - if not specified
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
new file mode 100644
index 000..c7e5aab
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * An implementation for Apache Hadoop compatible file system based mount-table
+ * file loading.
+ */
+public class HCFSMountTableConfigLoader implements MountTableConfigLoader {
+  private static final String REGEX_DOT = "[.]";
+  private static final Logger LOGGER =
+  LoggerFactory.getLogger(HCFSMountTableConfigLoader.class);
+  private Path mountTable = null;
+
+  /**
+   * Loads the mount-table configuration from hadoop compatible file system and
+   * add the configuration items to given configuration. Mount-table
+   * configuration format should be suffixed with version number.
+   * Format: mount-table..xml
+   * Example: mount-table.1.xml
+   * When user wants to update mount-table, the expectation is to upload new
+   * mount-table configuration file with monoto
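
The loader added by HDFS-15306 above reads versioned mount-table files (mount-table.<versionNumber>.xml) from any Hadoop compatible file system, the expectation being that updates are uploaded with a higher version number. A minimal sketch of pointing clients at such a central location; the HDFS path is illustrative, and the key string is spelled out from the CONFIG_VIEWFS_MOUNTTABLE_PATH constant shown in the diff.

    import org.apache.hadoop.conf.Configuration;

    public class CentralMountTableSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Location holding mount-table.1.xml, mount-table.2.xml, ... on a
        // Hadoop compatible fs; the path below is illustrative.
        conf.set("fs.viewfs.mounttable.path", "hdfs://nn1/config/mount-table-dir");
        // ViewFileSystemOverloadScheme can then pick up mount points from the
        // central file instead of per-client core-site.xml entries.
      }
    }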

[hadoop] branch branch-3.2 updated: HDFS-15427. Merged ListStatus with Fallback target filesystem and InternalDirViewFS. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 80111fe  HDFS-15427. Merged ListStatus with Fallback target filesystem 
and InternalDirViewFS. Contributed by Uma Maheswara Rao G.
80111fe is described below

commit 80111fe5bb65fc3d6276c007747e8c1c74d636e7
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jun 23 01:42:25 2020 -0700

HDFS-15427. Merged ListStatus with Fallback target filesystem and 
InternalDirViewFS. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 7c02d1889bbeabc73c95a4c83f0cd204365ff410)
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |   4 +-
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  89 
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  94 +---
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 251 -
 4 files changed, 360 insertions(+), 78 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 50c839b..d1e5d3a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -374,7 +374,7 @@ abstract class InodeTree {
   throws UnsupportedFileSystemException, URISyntaxException, IOException;
 
   protected abstract T getTargetFileSystem(INodeDir dir)
-  throws URISyntaxException;
+  throws URISyntaxException, IOException;
 
   protected abstract T getTargetFileSystem(String settings, URI[] mergeFsURIs)
   throws UnsupportedFileSystemException, URISyntaxException, IOException;
@@ -393,7 +393,7 @@ abstract class InodeTree {
 return rootFallbackLink != null;
   }
 
-  private INodeLink getRootFallbackLink() {
+  protected INodeLink getRootFallbackLink() {
 Preconditions.checkState(root.isInternalDir());
 return rootFallbackLink;
   }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index a1fd14b..16a5e08 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -288,8 +288,9 @@ public class ViewFileSystem extends FileSystem {
 
 @Override
 protected FileSystem getTargetFileSystem(final INodeDir 
dir)
-  throws URISyntaxException {
-  return new InternalDirOfViewFs(dir, creationTime, ugi, myUri, 
config);
+throws URISyntaxException {
+  return new InternalDirOfViewFs(dir, creationTime, ugi, myUri, config,
+  this);
 }
 
 @Override
@@ -516,10 +517,10 @@ public class ViewFileSystem extends FileSystem {
   /**
* {@inheritDoc}
*
-   * Note: listStatus on root("/") considers listing from fallbackLink if
-   * available. If the same directory name is present in configured mount path
-   * as well as in fallback link, then only the configured mount path will be
-   * listed in the returned result.
+   * Note: listStatus considers listing from fallbackLink if available. If the
+   * same directory path is present in configured mount path as well as in
+   * fallback fs, then only the fallback path will be listed in the returned
+   * result except for link.
*
* If any of the the immediate children of the given path f is a 
symlink(mount
* link), the returned FileStatus object of that children would be 
represented
@@ -1086,11 +1087,13 @@ public class ViewFileSystem extends FileSystem {
 final UserGroupInformation ugi; // the user/group of user who created 
mtable
 final URI myUri;
 private final boolean showMountLinksAsSymlinks;
+private InodeTree fsState;
 
 public InternalDirOfViewFs(final InodeTree.INodeDir dir,
 final long cTime, final UserGroupInformation ugi, URI uri,
-Configuration config) throws URISyntaxException {
+Configuration config, InodeTree fsState) throws URISyntaxException {
   myUri = uri;
+  this.fsState = fsState;
   try {
 initialize(myUri, config);
   } catch (IOException e) {
@@ -1186,7 +1189,8 @@ public class ViewFileSystem extends FileSystem {
 FileNotFoundException, IOException {
   checkPathIsSlash(f);
   FileStatus[] fallbackStatuses = listStatusForFallbackLink();
-  FileStatus[] result = new 
FileStatus[theInternalDir.getChildren().size()];
+  Set linkStatuses = new HashSet<>();
+  Set internalDirStatus

[hadoop] branch branch-3.2 updated: HDFS-15429. mkdirs should work when parent dir is an internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new d030f4b  HDFS-15429. mkdirs should work when parent dir is an 
internalDir and fallback configured. Contributed by Uma Maheswara Rao G.
d030f4b is described below

commit d030f4b2a632b97d0ceda4216880d5690ac5ee14
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 26 01:29:38 2020 -0700

HDFS-15429. mkdirs should work when parent dir is an internalDir and 
fallback configured. Contributed by Uma Maheswara Rao G.

(cherry picked from commit d5e1bb6155496cf9d82e121dd1b65d0072312197)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  25 ++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  28 +-
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 229 +---
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 297 +
 4 files changed, 542 insertions(+), 37 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 16a5e08..c960a21 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1300,6 +1300,31 @@ public class ViewFileSystem extends FileSystem {
   dir.toString().substring(1))) {
 return true; // this is the stupid semantics of FileSystem
   }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leafChild = (InodeTree.SlashPath.equals(dir)) ?
+InodeTree.SlashPath.toString() :
+dir.getName();
+Path dirToCreate = new Path(parent, leafChild);
+
+try {
+  return linkedFallbackFs.mkdirs(dirToCreate, permission);
+} catch (IOException e) {
+  if (LOG.isDebugEnabled()) {
+StringBuilder msg =
+new StringBuilder("Failed to create ").append(dirToCreate)
+.append(" at fallback : ")
+.append(linkedFallbackFs.getUri());
+LOG.debug(msg.toString(), e);
+  }
+  return false;
+}
+  }
+
   throw readOnlyMountTable("mkdirs",  dir);
 }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index b10c897..770f43b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -1127,11 +1127,35 @@ public class ViewFs extends AbstractFileSystem {
 
 @Override
 public void mkdir(final Path dir, final FsPermission permission,
-final boolean createParent) throws AccessControlException,
-FileAlreadyExistsException {
+final boolean createParent) throws IOException {
   if (theInternalDir.isRoot() && dir == null) {
 throw new FileAlreadyExistsException("/ already exits");
   }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+AbstractFileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leafChild = (InodeTree.SlashPath.equals(dir)) ?
+InodeTree.SlashPath.toString() :
+dir.getName();
+Path dirToCreate = new Path(parent, leafChild);
+try {
+  // We are here because, the parent dir already exist in the mount
+  // table internal tree. So, let's create parent always in fallback.
+  linkedFallbackFs.mkdir(dirToCreate, permission, true);
+  return;
+} catch (IOException e) {
+  if (LOG.isDebugEnabled()) {
+StringBuilder msg = new StringBuilder("Failed to create {}")
+.append(" at fallback fs : {}");
+LOG.debug(msg.toString(), dirToCreate, linkedFallbackFs.getUri());
+  }
+  throw e;
+}
+  }
+
   throw readOnlyMountTable("mkdir", dir);
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/t

[hadoop] branch branch-3.2 updated: HDFS-15418. ViewFileSystemOverloadScheme should represent mount links as non symlinks. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 6c22210  HDFS-15418. ViewFileSystemOverloadScheme should represent 
mount links as non symlinks. Contributed by Uma Maheswara Rao G.
6c22210 is described below

commit 6c22210baa3591ce4e280d244891399075e47424
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jun 20 00:32:02 2020 -0700

HDFS-15418. ViewFileSystemOverloadScheme should represent mount links as 
non symlinks. Contributed by Uma Maheswara Rao G.

(cherry picked from commit b27810aa6015253866ccc0ccc7247ad7024c0730)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java |   8 ++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  71 +++
 .../fs/viewfs/ViewFileSystemOverloadScheme.java|  20 +++-
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  80 -
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 132 +
 .../hadoop/fs/viewfs/TestViewfsFileStatus.java |   4 +-
 .../src/site/markdown/ViewFsOverloadScheme.md  |  42 +++
 ...SystemOverloadSchemeHdfsFileSystemContract.java |   5 +
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |   9 ++
 9 files changed, 295 insertions(+), 76 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 0a5d4b4..f454f63 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -90,4 +90,12 @@ public interface Constants {
   String CONFIG_VIEWFS_ENABLE_INNER_CACHE = "fs.viewfs.enable.inner.cache";
 
   boolean CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT = true;
+
+  /**
+   * Enable ViewFileSystem to show mountlinks as symlinks.
+   */
+  String CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS =
+  "fs.viewfs.mount.links.as.symlinks";
+
+  boolean CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT = true;
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index e2d8eac..a1fd14b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.fs.viewfs;
 
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.PERMISSION_555;
 
 import java.io.FileNotFoundException;
@@ -525,10 +527,18 @@ public class ViewFileSystem extends FileSystem {
* the target path FileStatus object. The target path will be available via
* getSymlink on that children's FileStatus object. Since it represents as
* symlink, isDirectory on that children's FileStatus will return false.
+   * This behavior can be changed by setting an advanced configuration
+   * fs.viewfs.mount.links.as.symlinks to false. In this case, mount points 
will
+   * be represented as non-symlinks and all the file/directory attributes like
+   * permissions, isDirectory etc will be assigned from it's resolved target
+   * directory/file.
*
* If you want to get the FileStatus of target path for that children, you 
may
* want to use GetFileStatus API with that children's symlink path. Please 
see
* {@link ViewFileSystem#getFileStatus(Path f)}
+   *
+   * Note: In ViewFileSystem, by default the mount links are represented as
+   * symlinks.
*/
   @Override
   public FileStatus[] listStatus(final Path f) throws AccessControlException,
@@ -1075,6 +1085,7 @@ public class ViewFileSystem extends FileSystem {
 final long creationTime; // of the the mount table
 final UserGroupInformation ugi; // the user/group of user who created 
mtable
 final URI myUri;
+private final boolean showMountLinksAsSymlinks;
 
 public InternalDirOfViewFs(final InodeTree.INodeDir dir,
 final long cTime, final UserGroupInformation ugi, URI uri,
@@ -1088,6 +1099,9 @@ public class ViewFileSystem extends FileSystem {
   theInternalDir = dir;
   creationTime = cTime;
   this.ugi = ugi;
+  showMountLinksAsSymlinks = config
+  .getBoolean(CONFIG_VIEWFS_MOU
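
HDFS-15418 above introduces fs.viewfs.mount.links.as.symlinks, defaulting to true (the old symlink representation). A minimal sketch of turning it off so that mount points are reported as non-symlinks, taking permissions, isDirectory and the other attributes from their resolved targets; only the flag itself comes from the patch, the rest is illustrative.

    import org.apache.hadoop.conf.Configuration;

    public class MountLinksAsSymlinksSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // false: listStatus children of a ViewFS directory report the resolved
        // target's attributes instead of a synthetic symlink status.
        conf.setBoolean("fs.viewfs.mount.links.as.symlinks", false);
      }
    }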

[hadoop] branch branch-3.2 updated: HADOOP-17060. Clarify listStatus and getFileStatus behaviors inconsistent in the case of ViewFs implementation for isDirectory. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 4a9ec4d  HADOOP-17060. Clarify listStatus and getFileStatus behaviors 
inconsistent in the case of ViewFs implementation for isDirectory. Contributed 
by Uma Maheswara Rao G.
4a9ec4d is described below

commit 4a9ec4d1431c819870444915a43283089a783443
Author: Uma Maheswara Rao G 
AuthorDate: Wed Jun 10 15:00:02 2020 -0700

HADOOP-17060. Clarify listStatus and getFileStatus behaviors inconsistent 
in the case of ViewFs implementation for isDirectory. Contributed by Uma 
Maheswara Rao G.

(cherry picked from commit 93b121a9717bb4ef5240fda877ebb5275f6446b4)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 36 --
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 24 +++
 .../src/main/java/org/apache/hadoop/fs/Hdfs.java   | 22 +
 .../apache/hadoop/hdfs/DistributedFileSystem.java  | 25 ---
 4 files changed, 94 insertions(+), 13 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 7552c06..e2d8eac 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -486,6 +486,14 @@ public class ViewFileSystem extends FileSystem {
 : new ViewFsFileStatus(orig, qualified);
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * If the given path is a symlink(mount link), the path will be resolved to a
+   * target path and it will get the resolved path's FileStatus object. It will
+   * not be represented as a symlink and isDirectory API returns true if the
+   * resolved path is a directory, false otherwise.
+   */
   @Override
   public FileStatus getFileStatus(final Path f) throws AccessControlException,
   FileNotFoundException, IOException {
@@ -503,6 +511,25 @@ public class ViewFileSystem extends FileSystem {
 res.targetFileSystem.access(res.remainingPath, mode);
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * Note: listStatus on root("/") considers listing from fallbackLink if
+   * available. If the same directory name is present in configured mount path
+   * as well as in fallback link, then only the configured mount path will be
+   * listed in the returned result.
+   *
+   * If any of the the immediate children of the given path f is a 
symlink(mount
+   * link), the returned FileStatus object of that children would be 
represented
+   * as a symlink. It will not be resolved to the target path and will not get
+   * the target path FileStatus object. The target path will be available via
+   * getSymlink on that children's FileStatus object. Since it represents as
+   * symlink, isDirectory on that children's FileStatus will return false.
+   *
+   * If you want to get the FileStatus of target path for that children, you 
may
+   * want to use GetFileStatus API with that children's symlink path. Please 
see
+   * {@link ViewFileSystem#getFileStatus(Path f)}
+   */
   @Override
   public FileStatus[] listStatus(final Path f) throws AccessControlException,
   FileNotFoundException, IOException {
@@ -1135,20 +1162,11 @@ public class ViewFileSystem extends FileSystem {
   checkPathIsSlash(f);
   return new FileStatus(0, true, 0, 0, creationTime, creationTime,
   PERMISSION_555, ugi.getShortUserName(), ugi.getPrimaryGroupName(),
-
   new Path(theInternalDir.fullPath).makeQualified(
   myUri, ROOT_PATH));
 }
 
 
-/**
- * {@inheritDoc}
- *
- * Note: listStatus on root("/") considers listing from fallbackLink if
- * available. If the same directory name is present in configured mount
- * path as well as in fallback link, then only the configured mount path
- * will be listed in the returned result.
- */
 @Override
 public FileStatus[] listStatus(Path f) throws AccessControlException,
 FileNotFoundException, IOException {
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 8cebc76..5d06b30 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -351,6 +351,14 @@ public class ViewFs extends AbstractFileSystem {
 return res.targetFileSystem.getFileChecksum(res.remainingPath);
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * If the giv
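
The javadoc added by HADOOP-17060 above spells out the asymmetry: listStatus reports mount-link children as symlinks, while getFileStatus on a child's own path resolves the link. A small helper sketch that combines the two; the class and method names are made up for illustration, and it assumes the caller already holds an initialized ViewFileSystem instance.

    import java.io.IOException;

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    /** Sketch: listing with mount links resolved to their targets. */
    public final class ResolvedListingSketch {
      private ResolvedListingSketch() {
      }

      /** Lists dir and swaps each symlink entry for its target's FileStatus. */
      public static FileStatus[] listResolved(FileSystem viewFs, Path dir)
          throws IOException {
        FileStatus[] children = viewFs.listStatus(dir);
        for (int i = 0; i < children.length; i++) {
          if (children[i].isSymlink()) {
            // Per the javadoc above, getFileStatus on the child's own path
            // resolves the mount link, so isDirectory reflects the target.
            children[i] = viewFs.getFileStatus(children[i].getPath());
          }
        }
        return children;
      }
    }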

[hadoop] branch branch-3.2 updated: HDFS-15396. Fix TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir. Contributed by Ayush Saxena.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new fc9445a  HDFS-15396. Fix 
TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir. 
Contributed by Ayush Saxena.
fc9445a is described below

commit fc9445abbddac906b3258df2414a39ac72426885
Author: Ayush Saxena 
AuthorDate: Mon Jun 8 01:59:10 2020 +0530

HDFS-15396. Fix 
TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir. 
Contributed by Ayush Saxena.

(cherry picked from commit a8610c15c498531bf3c011f1b0ace8ef61f2)
---
 .../src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java  | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index a19366e..7552c06 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1163,6 +1163,9 @@ public class ViewFileSystem extends FileSystem {
   INodeLink link = (INodeLink) inode;
   try {
 String linkedPath = link.getTargetFileSystem().getUri().getPath();
+if("".equals(linkedPath)) {
+  linkedPath = "/";
+}
 FileStatus status =
 ((ChRootedFileSystem)link.getTargetFileSystem())
 .getMyFs().getFileStatus(new Path(linkedPath));





[hadoop] branch branch-3.2 updated: HADOOP-17029. Return correct permission and owner for listing on internal directories in ViewFs. Contributed by Abhishek Das.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new db5c32d  HADOOP-17029. Return correct permission and owner for listing 
on internal directories in ViewFs. Contributed by Abhishek Das.
db5c32d is described below

commit db5c32d2a56bad58cc4403305e60469aa7f1d854
Author: Abhishek Das 
AuthorDate: Fri Jun 5 14:56:51 2020 -0700

HADOOP-17029. Return correct permission and owner for listing on internal 
directories in ViewFs. Contributed by Abhishek Das.

(cherry picked from commit e7dd02768b658b2a1f216fbedc65938d9b6ca6e9)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  27 +++--
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  41 +--
 .../hadoop/fs/viewfs/TestViewfsFileStatus.java | 118 -
 3 files changed, 146 insertions(+), 40 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index bdf429e..a19366e 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1161,13 +1161,26 @@ public class ViewFileSystem extends FileSystem {
 INode inode = iEntry.getValue();
 if (inode.isLink()) {
   INodeLink link = (INodeLink) inode;
-
-  result[i++] = new FileStatus(0, false, 0, 0,
-creationTime, creationTime, PERMISSION_555,
-ugi.getShortUserName(), ugi.getPrimaryGroupName(),
-link.getTargetLink(),
-new Path(inode.fullPath).makeQualified(
-myUri, null));
+  try {
+String linkedPath = link.getTargetFileSystem().getUri().getPath();
+FileStatus status =
+((ChRootedFileSystem)link.getTargetFileSystem())
+.getMyFs().getFileStatus(new Path(linkedPath));
+result[i++] = new FileStatus(status.getLen(), false,
+  status.getReplication(), status.getBlockSize(),
+  status.getModificationTime(), status.getAccessTime(),
+  status.getPermission(), status.getOwner(), status.getGroup(),
+  link.getTargetLink(),
+  new Path(inode.fullPath).makeQualified(
+  myUri, null));
+  } catch (FileNotFoundException ex) {
+result[i++] = new FileStatus(0, false, 0, 0,
+  creationTime, creationTime, PERMISSION_555,
+  ugi.getShortUserName(), ugi.getPrimaryGroupName(),
+  link.getTargetLink(),
+  new Path(inode.fullPath).makeQualified(
+  myUri, null));
+  }
 } else {
   result[i++] = new FileStatus(0, true, 0, 0,
 creationTime, creationTime, PERMISSION_555,
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index dde6649..8cebc76 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -910,11 +910,25 @@ public class ViewFs extends AbstractFileSystem {
   if (inode.isLink()) {
 INodeLink inodelink = 
   (INodeLink) inode;
-result = new FileStatus(0, false, 0, 0, creationTime, creationTime,
+try {
+  String linkedPath = inodelink.getTargetFileSystem()
+  .getUri().getPath();
+  FileStatus status = ((ChRootedFs)inodelink.getTargetFileSystem())
+  .getMyFs().getFileStatus(new Path(linkedPath));
+  result = new FileStatus(status.getLen(), false,
+status.getReplication(), status.getBlockSize(),
+status.getModificationTime(), status.getAccessTime(),
+status.getPermission(), status.getOwner(), status.getGroup(),
+inodelink.getTargetLink(),
+new Path(inode.fullPath).makeQualified(
+myUri, null));
+} catch (FileNotFoundException ex) {
+  result = new FileStatus(0, false, 0, 0, creationTime, creationTime,
 PERMISSION_555, ugi.getShortUserName(), ugi.getPrimaryGroupName(),
 inodelink.getTargetLink(),
 new Path(inode.fullPath).makeQualified(
 myUri, null));
+}
   } else {
 result = new FileStatus(0, true, 0, 0, creationTime, creationTime,
   PERMISSION_555, ugi.getShortUserName(), ugi.getPrimaryGroupName(),
@@ -969,12 +983,25 @@ public class ViewFs extends

[hadoop] branch branch-3.2 updated: HDFS-15330. Document the ViewFSOverloadScheme details in ViewFS guide. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new e7cead1  HDFS-15330. Document the ViewFSOverloadScheme details in 
ViewFS guide. Contributed by Uma Maheswara Rao G.
e7cead1 is described below

commit e7cead114314412d461ed1c4b28ba33726c65b9b
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 5 10:58:21 2020 -0700

HDFS-15330. Document the ViewFSOverloadScheme details in ViewFS guide. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 76fa0222f0d2e2d92b4a1eedba8b3e38002e8c23)
(cherry picked from commit 418580446b65be3a0674762e76fc2cb9a1e5629a)
---
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |  40 -
 .../hadoop-hdfs/src/site/markdown/ViewFs.md|   6 +
 .../src/site/markdown/ViewFsOverloadScheme.md  | 163 +
 .../site/resources/images/ViewFSOverloadScheme.png | Bin 0 -> 190004 bytes
 hadoop-project/src/site/site.xml   |   1 +
 5 files changed, 209 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index 306884c..32e1a7b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -672,4 +672,42 @@ Usage: `hdfs debug recoverLease -path  [-retries 
]`
 | [`-path` *path*] | HDFS path for which to recover the lease. |
 | [`-retries` *num-retries*] | Number of times the client will retry calling 
recoverLease. The default number of retries is 1. |
 
-Recover the lease on the specified path. The path must reside on an HDFS 
filesystem. The default number of retries is 1.
+Recover the lease on the specified path. The path must reside on an HDFS file 
system. The default number of retries is 1.
+
+dfsadmin with ViewFsOverloadScheme
+--
+
+Usage: `hdfs dfsadmin -fs  `
+
+| COMMAND\_OPTION | Description |
+|: |: |
+| `-fs` *child fs mount link URI* | It is a logical mount link path to a child 
file system in the ViewFS world. This URI is typically formed as the src mount 
link prefixed with fs.defaultFS. Please note, this is not an actual child file 
system URI; instead it is a logical mount link URI pointing to the actual child 
file system.|
+
+Example command usage:
+   `hdfs dfsadmin -fs hdfs://nn1 -safemode enter`
+
+In ViewFsOverloadScheme, we may have multiple child file systems as mount 
point mappings as shown in [ViewFsOverloadScheme 
Guide](./ViewFsOverloadScheme.html). Here -fs option is an optional generic 
parameter supported by dfsadmin. When users want to execute commands on one of 
the child file system, they need to pass that file system mount mapping link 
uri to -fs option. Let's take an example mount link configuration and dfsadmin 
command below.
+
+Mount link:
+
+```xml
+<property>
+  <name>fs.defaultFS</name>
+  <value>hdfs://MyCluster1</value>
+</property>
+
+<property>
+  <name>fs.viewfs.mounttable.MyCluster1./user</name>
+  <value>hdfs://MyCluster2/user</value>
+  <!-- maps to target: hdfs://MyCluster2/user
+   mount link path: /user
+   mount link uri: hdfs://MyCluster1/user
+   mount target uri for /user: hdfs://MyCluster2/user -->
+</property>
+```
+
+If a user wants to talk to `hdfs://MyCluster2/`, they can pass the -fs option 
(`-fs hdfs://MyCluster1/user`).
+Since /user was mapped to the cluster `hdfs://MyCluster2/user`, dfsadmin resolves 
the passed (`-fs hdfs://MyCluster1/user`) to the target fs 
(`hdfs://MyCluster2/user`).
+This way users can access all hdfs child file systems in 
ViewFsOverloadScheme.
+If no `-fs` option is provided, it will try to connect to the configured 
fs.defaultFS cluster, if a cluster is running at the fs.defaultFS uri.
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
index f851ef6..52ad49c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
@@ -361,6 +361,12 @@ resume its work, it's a good idea to provision some sort 
of cron job to purge su
 
 Delegation tokens for the cluster to which you are submitting the job 
(including all mounted volumes for that cluster’s mount table), and for input 
and output paths to your map-reduce job (including all volumes mounted via 
mount tables for the specified input and output paths) are all handled 
automatically. In addition, there is a way to add additional delegation tokens 
to the base cluster configuration for special circumstances.
 
+Don't want to change scheme or difficult to copy mount-table configurations to 
all clients?
+---
+
+Please refer to the [View File System Overload Scheme 

[hadoop] branch branch-3.2 updated: HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new aa8de2f  HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme 
in processPath. Contributed by Uma Maheswara Rao G.
aa8de2f is described below

commit aa8de2f43b49ac9ad328de39fd5e69dd703ba461
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 12 14:32:19 2020 -0700

HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 785b1def959fab6b8b766410bcd240feee13)
(cherry picked from commit 120ee793fc4bcbf9d1945d5e38e3ad5b2b290a0e)
---
 .../java/org/apache/hadoop/fs/shell/FsUsage.java   |   3 +-
 .../hadoop/fs/viewfs/ViewFileSystemUtil.java   |  14 +-
 ...ViewFileSystemOverloadSchemeWithFSCommands.java | 173 +
 3 files changed, 188 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
index 6596527..64aade3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
@@ -128,7 +128,8 @@ class FsUsage extends FsCommand {
 
 @Override
 protected void processPath(PathData item) throws IOException {
-  if (ViewFileSystemUtil.isViewFileSystem(item.fs)) {
+  if (ViewFileSystemUtil.isViewFileSystem(item.fs)
+  || ViewFileSystemUtil.isViewFileSystemOverloadScheme(item.fs)) {
 ViewFileSystem viewFileSystem = (ViewFileSystem) item.fs;
 Map  fsStatusMap =
 ViewFileSystemUtil.getStatus(viewFileSystem, item.path);
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
index c8a1d78..f486a10 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
@@ -52,6 +52,17 @@ public final class ViewFileSystemUtil {
   }
 
   /**
+   * Check if the FileSystem is a ViewFileSystemOverloadScheme.
+   *
+   * @param fileSystem
+   * @return true if the fileSystem is ViewFileSystemOverloadScheme
+   */
+  public static boolean isViewFileSystemOverloadScheme(
+  final FileSystem fileSystem) {
+return fileSystem instanceof ViewFileSystemOverloadScheme;
+  }
+
+  /**
* Get FsStatus for all ViewFsMountPoints matching path for the given
* ViewFileSystem.
*
@@ -93,7 +104,8 @@ public final class ViewFileSystemUtil {
*/
   public static Map getStatus(
   FileSystem fileSystem, Path path) throws IOException {
-if (!isViewFileSystem(fileSystem)) {
+if (!(isViewFileSystem(fileSystem)
+|| isViewFileSystemOverloadScheme(fileSystem))) {
   throw new UnsupportedFileSystemException("FileSystem '"
   + fileSystem.getUri() + "'is not a ViewFileSystem.");
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
new file mode 100644
index 000..a974377
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
@@ -0,0 +1,173 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.tools;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.net.URI;
+import java.net.UR
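
For HDFS-15387 above, the df handling in FsUsage now also accepts ViewFileSystemOverloadScheme instances. A rough sketch of the same per-mount-point lookup done programmatically; the viewfs URI is illustrative and the generic types on getStatus are assumed from its use in FsUsage.

    import java.net.URI;
    import java.util.Map;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.viewfs.ViewFileSystem;
    import org.apache.hadoop.fs.viewfs.ViewFileSystemUtil;

    public class ViewFsDfSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // URI is illustrative; getStatus expects a ViewFileSystem (or, with
        // this change, a ViewFileSystemOverloadScheme) instance.
        FileSystem fs = FileSystem.get(URI.create("viewfs://ClusterX/"), conf);
        Map<ViewFileSystem.MountPoint, FsStatus> statuses =
            ViewFileSystemUtil.getStatus(fs, new Path("/"));
        for (FsStatus status : statuses.values()) {
          System.out.println(status.getUsed() + " used / "
              + status.getCapacity() + " capacity");
        }
      }
    }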

[hadoop] branch branch-3.2 updated: HDFS-15389. DFSAdmin should close filesystem and dfsadmin -setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by Ayush Saxena

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new e804218  HDFS-15389. DFSAdmin should close filesystem and dfsadmin 
-setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by 
Ayush Saxena
e804218 is described below

commit e80421820a2dde7a59a470dbae60882e55102e73
Author: Ayush Saxena 
AuthorDate: Sat Jun 6 10:49:38 2020 +0530

HDFS-15389. DFSAdmin should close filesystem and dfsadmin 
-setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by 
Ayush Saxena

(cherry picked from commit cc671b16f7b0b7c1ed7b41b96171653dc43cf670)
(cherry picked from commit bee2846bee4ae676bdc14585f8a3927a9dd7df37)
---
 .../java/org/apache/hadoop/hdfs/tools/DFSAdmin.java  | 13 +++--
 ...TestViewFileSystemOverloadSchemeWithDFSAdmin.java | 20 
 2 files changed, 23 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
index 5fdc835..ae66064 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
@@ -479,9 +479,9 @@ public class DFSAdmin extends FsShell {
   public DFSAdmin(Configuration conf) {
 super(conf);
   }
-  
+
   protected DistributedFileSystem getDFS() throws IOException {
-return AdminHelper.getDFS(getConf());
+return AdminHelper.checkAndGetDFS(getFS(), getConf());
   }
   
   /**
@@ -1036,14 +1036,7 @@ public class DFSAdmin extends FsShell {
   System.err.println("Bandwidth should be a non-negative integer");
   return exitCode;
 }
-
-FileSystem fs = getFS();
-if (!(fs instanceof DistributedFileSystem)) {
-  System.err.println("FileSystem is " + fs.getUri());
-  return exitCode;
-}
-
-DistributedFileSystem dfs = (DistributedFileSystem) fs;
+DistributedFileSystem dfs = getDFS();
 try{
   dfs.setBalancerBandwidth(bandwidth);
   System.out.println("Balancer bandwidth is set to " + bandwidth);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
index 1961dc2..a9475dd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
@@ -263,4 +263,24 @@ public class TestViewFileSystemOverloadSchemeWithDFSAdmin {
 assertOutMsg("Disallowing snapshot on / succeeded", 1);
 assertEquals(0, ret);
   }
+
+  /**
+   * Tests setBalancerBandwidth with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testSetBalancerBandwidth() throws Exception {
+final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+addMountLinks(defaultFSURI.getAuthority(),
+new String[] {HDFS_USER_FOLDER, LOCAL_FOLDER },
+new String[] {hdfsTargetPath.toUri().toString(),
+localTargetDir.toURI().toString() },
+conf);
+final DFSAdmin dfsAdmin = new DFSAdmin(conf);
+redirectStream();
+int ret = ToolRunner.run(dfsAdmin,
+new String[] {"-fs", defaultFSURI.toString(), "-setBalancerBandwidth",
+"1000"});
+assertOutMsg("Balancer bandwidth is set to 1000", 0);
+assertEquals(0, ret);
+  }
 }
\ No newline at end of file





[hadoop] branch branch-3.2 updated: HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's scheme and target uris schemes are same. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 84ceb6d  HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's 
scheme and target uris schemes are same. Contributed by Uma Maheswara Rao G.
84ceb6d is described below

commit 84ceb6d5204b08e0044a592d4b21625b57e015f6
Author: Uma Maheswara Rao G 
AuthorDate: Thu May 21 21:34:58 2020 -0700

HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's scheme and 
target uris schemes are same. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 4734c77b4b64b7c6432da4cc32881aba85f94ea1)
(cherry picked from commit 8e71e85af70c17f2350f794f8bc2475eb1e3acea)
---
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java|  15 ++-
 .../java/org/apache/hadoop/fs/viewfs/FsGetter.java |  47 
 .../fs/viewfs/HCFSMountTableConfigLoader.java  |   3 +-
 .../org/apache/hadoop/fs/viewfs/NflyFSystem.java   |  29 -
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  24 +---
 .../hadoop/fs/viewfs/ViewFileSystemBaseTest.java   |   1 -
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |  28 -
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 121 +
 8 files changed, 230 insertions(+), 38 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
index 4c3dae9..6dd1f65 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
@@ -136,6 +136,17 @@ public class ConfigUtil {
   }
 
   /**
+   * Add nfly link to configuration for the given mount table.
+   */
+  public static void addLinkNfly(Configuration conf, String mountTableName,
+  String src, String settings, final String targets) {
+conf.set(
+getConfigViewFsPrefix(mountTableName) + "."
++ Constants.CONFIG_VIEWFS_LINK_NFLY + "." + settings + "." + src,
+targets);
+  }
+
+  /**
*
* @param conf
* @param mountTableName
@@ -149,9 +160,7 @@ public class ConfigUtil {
 settings = settings == null
 ? "minReplication=2,repairOnRead=true"
 : settings;
-
-conf.set(getConfigViewFsPrefix(mountTableName) + "." +
-Constants.CONFIG_VIEWFS_LINK_NFLY + "." + settings + "." + src,
+addLinkNfly(conf, mountTableName, src, settings,
 StringUtils.uriToString(targets));
   }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
new file mode 100644
index 000..071af11
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+/**
+ * File system instance getter.
+ */
+@Private
+class FsGetter {
+
+  /**
+   * Gets new file system instance of given uri.
+   */
+  public FileSystem getNewInstance(URI uri, Configuration conf)
+  throws IOException {
+return FileSystem.newInstance(uri, conf);
+  }
+
+  /**
+   * Gets file system instance of given uri.
+   */
+  public FileSystem get(URI uri, Configuration conf) throws IOException {
+return FileSystem.get(uri, conf);
+  }
+}
\ No newline at end of file
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs

[hadoop] branch branch-3.2 updated: HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 852ee71  HDFS-15321. Make DFSAdmin tool to work with 
ViewFileSystemOverloadScheme. Contributed by Uma Maheswara Rao G.
852ee71 is described below

commit 852ee713548744a0709e03a8a743cbe69e436764
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jun 2 11:09:26 2020 -0700

HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit ed83c865dd0b4e92f3f89f79543acc23792bb69c)
(cherry picked from commit 0b5e202614f0bc20a0db6656f924fa4d2741d00c)
---
 .../fs/viewfs/ViewFileSystemOverloadScheme.java|  29 +++
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |   2 +-
 .../org/apache/hadoop/hdfs/tools/AdminHelper.java  |  25 +-
 .../org/apache/hadoop/hdfs/tools/DFSAdmin.java |  13 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 266 +
 5 files changed, 317 insertions(+), 18 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index f5952d5..36f9cd1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.fs.viewfs;
 
+import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
@@ -27,6 +28,7 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 
 /**
@@ -227,4 +229,31 @@ public class ViewFileSystemOverloadScheme extends 
ViewFileSystem {
 
   }
 
+  /**
+   * This is an admin only API to give access to its child raw file system, if
+   * the path is link. If the given path is an internal directory(path is from
+   * mount paths tree), it will initialize the file system of given path uri
+   * directly. If path cannot be resolved to any internal directory or link, it
+   * will throw NotInMountpointException. Please note, this API will not return
+   * chrooted file system. Instead, this API will get actual raw file system
+   * instances.
+   *
+   * @param path - fs uri path
+   * @param conf - configuration
+   * @throws IOException
+   */
+  public FileSystem getRawFileSystem(Path path, Configuration conf)
+  throws IOException {
+InodeTree.ResolveResult res;
+try {
+  res = fsState.resolve(getUriPath(path), true);
+  return res.isInternalDir() ? fsGetter().get(path.toUri(), conf)
+  : ((ChRootedFileSystem) res.targetFileSystem).getMyFs();
+} catch (FileNotFoundException e) {
+  // No link configured with passed path.
+  throw new NotInMountpointException(path,
+  "No link found for the given path.");
+}
+  }
+
 }
\ No newline at end of file
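
To make the intent of the new admin API concrete, here is a hedged sketch of how a DFSAdmin-style caller might use it. It assumes a mount table is already configured for the cluster; the path /data and the surrounding setup are illustrative, not taken from this patch:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme;

    public class RawFsLookup {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster/"), conf);

        if (fs instanceof ViewFileSystemOverloadScheme) {
          // Resolve the mount link to its raw (non-chrooted) target so the
          // admin command talks to the actual backing file system; throws
          // NotInMountpointException if /data is neither a configured link
          // nor an internal mount-tree directory.
          FileSystem raw = ((ViewFileSystemOverloadScheme) fs)
              .getRawFileSystem(new Path("/data"), conf);
          System.out.println("raw target: " + raw.getUri());
        }
      }
    }
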
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
index f051c9c..efced73 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
@@ -192,7 +192,7 @@ public class ViewFsTestSetup {
* Adds the given mount links to the configuration. Mount link mappings are
* in sources, targets at their respective index locations.
*/
-  static void addMountLinksToConf(String mountTable, String[] sources,
+  public static void addMountLinksToConf(String mountTable, String[] sources,
   String[] targets, Configuration config) throws URISyntaxException {
 for (int i = 0; i < sources.length; i++) {
   String src = sources[i];
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
index 9cb646b..27cdf70 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
@@ -1,4 +1,5 @@
 /**
+

[hadoop] branch branch-3.2 updated: HADOOP-17024. ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root). Contributed by Abhishek Das.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 7cf9601  HADOOP-17024. ListStatus on ViewFS root (ls "/") should list 
the linkFallBack root (configured target root). Contributed by Abhishek Das.
7cf9601 is described below

commit 7cf96019870a3042ae7dcf1a3575c625703404a7
Author: Abhishek Das 
AuthorDate: Mon May 18 22:27:12 2020 -0700

HADOOP-17024. ListStatus on ViewFS root (ls "/") should list the 
linkFallBack root (configured target root). Contributed by Abhishek Das.

(cherry picked from commit ce4ec7445345eb94c6741d416814a4eac319f0a6)
(cherry picked from commit 5b248de42d2ae42710531a1514a21d60a1fca4b2)
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 49 ++-
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 51 ++-
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 98 ++
 4 files changed, 209 insertions(+), 2 deletions(-)
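
As a rough illustration of the behavior added below, a sketch of a client configuration with one mount link plus a fallback root; the mount-table name "cluster" and the nn1 targets are hypothetical:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RootListingWithFallback {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // One explicit mount point and a fallback root.
        conf.set("fs.viewfs.mounttable.cluster.link./user",
            "hdfs://nn1:8020/user");
        conf.set("fs.viewfs.mounttable.cluster.linkFallback",
            "hdfs://nn1:8020/");

        FileSystem viewFs =
            FileSystem.get(URI.create("viewfs://cluster/"), conf);

        // With this change, ls on "/" shows /user from the mount table plus
        // any other top-level directories found under the fallback root;
        // when a name exists in both, the configured mount path wins.
        for (FileStatus st : viewFs.listStatus(new Path("/"))) {
          System.out.println(st.getPath());
        }
      }
    }
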

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 6992343..50c839b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -123,6 +123,7 @@ abstract class InodeTree {
 private final Map> children = new HashMap<>();
 private T internalDirFs =  null; //filesystem of this internal directory
 private boolean isRoot = false;
+private INodeLink fallbackLink = null;
 
 INodeDir(final String pathToNode, final UserGroupInformation aUgi) {
   super(pathToNode, aUgi);
@@ -149,6 +150,17 @@ abstract class InodeTree {
   return isRoot;
 }
 
+INodeLink getFallbackLink() {
+  return fallbackLink;
+}
+
+void addFallbackLink(INodeLink link) throws IOException {
+  if (!isRoot) {
+throw new IOException("Fallback link can only be added for root");
+  }
+  this.fallbackLink = link;
+}
+
 Map> getChildren() {
   return Collections.unmodifiableMap(children);
 }
@@ -580,6 +592,7 @@ abstract class InodeTree {
 }
   }
   rootFallbackLink = fallbackLink;
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
 
 if (!gotMountTableEntry) {
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index a13b6ea..f626ffe 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1161,10 +1161,19 @@ public class ViewFileSystem extends FileSystem {
 }
 
 
+/**
+ * {@inheritDoc}
+ *
+ * Note: listStatus on root("/") considers listing from fallbackLink if
+ * available. If the same directory name is present in configured mount
+ * path as well as in fallback link, then only the configured mount path
+ * will be listed in the returned result.
+ */
 @Override
 public FileStatus[] listStatus(Path f) throws AccessControlException,
 FileNotFoundException, IOException {
   checkPathIsSlash(f);
+  FileStatus[] fallbackStatuses = listStatusForFallbackLink();
   FileStatus[] result = new 
FileStatus[theInternalDir.getChildren().size()];
   int i = 0;
   for (Entry> iEntry :
@@ -1187,7 +1196,45 @@ public class ViewFileSystem extends FileSystem {
 myUri, null));
 }
   }
-  return result;
+  if (fallbackStatuses.length > 0) {
+return consolidateFileStatuses(fallbackStatuses, result);
+  } else {
+return result;
+  }
+}
+
+private FileStatus[] consolidateFileStatuses(FileStatus[] fallbackStatuses,
+FileStatus[] mountPointStatuses) {
+  ArrayList result = new ArrayList<>();
+  Set pathSet = new HashSet<>();
+  for (FileStatus status : mountPointStatuses) {
+result.add(status);
+pathSet.add(status.getPath().getName());
+  }
+  for (FileStatus status : fallbackStatuses) {
+if (!pathSet.contains(status.getPath().getName())) {
+  result.add(status);
+}
+  }
+  return result.toArray(new FileStatus[0]);
+}
+
+private FileStatus[] listStatusForFallbackLink() throws IOException {
+  if (theInternalDir.isRoot() &&
+  theInternalDir.getFallbackLink() != null

[hadoop] branch branch-3.2 updated: HDFS-15306. Make mount-table to read from central place ( Let's say from HDFS). Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 3dcc9ae  HDFS-15306. Make mount-table to read from central place ( 
Let's say from HDFS). Contributed by Uma Maheswara Rao G.
3dcc9ae is described below

commit 3dcc9aed4d7ccd4fd495a206dbe614a9706db4e1
Author: Uma Maheswara Rao G 
AuthorDate: Thu May 14 17:29:35 2020 -0700

HDFS-15306. Make mount-table to read from central place ( Let's say from 
HDFS). Contributed by Uma Maheswara Rao G.

(cherry picked from commit ac4a2e11d98827c7926a34cda27aa7bcfd3f36c1)
(cherry picked from commit 544996c85702af7ae241ef2f18e2597e2b4050be)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java |   5 +
 .../fs/viewfs/HCFSMountTableConfigLoader.java  | 122 ++
 .../hadoop/fs/viewfs/MountTableConfigLoader.java   |  44 +
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 180 -
 .../org/apache/hadoop/fs/viewfs/package-info.java  |  26 +++
 .../fs/viewfs/TestHCFSMountTableConfigLoader.java  | 165 +++
 ...iewFSOverloadSchemeCentralMountTableConfig.java |  77 +
 ...iewFileSystemOverloadSchemeLocalFileSystem.java |  47 --
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |  71 +++-
 ...FSOverloadSchemeWithMountTableConfigInHDFS.java |  68 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 125 +-
 11 files changed, 797 insertions(+), 133 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 37f1a16..0a5d4b4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -30,6 +30,11 @@ public interface Constants {
* Prefix for the config variable prefix for the ViewFs mount-table
*/
   public static final String CONFIG_VIEWFS_PREFIX = "fs.viewfs.mounttable";
+
+  /**
+   * Prefix for the config variable for the ViewFs mount-table path.
+   */
+  String CONFIG_VIEWFS_MOUNTTABLE_PATH = CONFIG_VIEWFS_PREFIX + ".path";
  
   /**
* Prefix for the home dir for the mount table - if not specified
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
new file mode 100644
index 000..c7e5aab
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * An implementation for Apache Hadoop compatible file system based mount-table
+ * file loading.
+ */
+public class HCFSMountTableConfigLoader implements MountTableConfigLoader {
+  private static final String REGEX_DOT = "[.]";
+  private static final Logger LOGGER =
+  LoggerFactory.getLogger(HCFSMountTableConfigLoader.class);
+  private Path mountTable = null;
+
+  /**
+   * Loads the mount-table configuration from hadoop compatible file system and
+   * add the configuration items to given configuration. Mount-table
+   * configuration format should be suffixed with version number.
+   * Format: mount-table..xml
+   * Example: mount-table.1.xml
+   * When user wants to update mount-table, the expectation is to upload new
+   * mount-table configuration file with monoto

[hadoop] branch branch-3.3 updated: HDFS-15429. mkdirs should work when parent dir is an internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 81e33d2  HDFS-15429. mkdirs should work when parent dir is an 
internalDir and fallback configured. Contributed by Uma Maheswara Rao G.
81e33d2 is described below

commit 81e33d22a0d83abc88e3cd2411a5f198430800c4
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 26 01:29:38 2020 -0700

HDFS-15429. mkdirs should work when parent dir is an internalDir and 
fallback configured. Contributed by Uma Maheswara Rao G.

(cherry picked from commit d5e1bb6155496cf9d82e121dd1b65d0072312197)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  25 ++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  28 +-
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 229 +---
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 297 +
 4 files changed, 542 insertions(+), 37 deletions(-)
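
A hedged sketch of what the change below enables on the client side: /user is an internal (mount-table) directory because only /user/hive is a link, and the fallback root makes the mkdirs succeed; all names and targets are hypothetical:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MkdirOnInternalDirWithFallback {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.viewfs.mounttable.cluster.link./user/hive",
            "hdfs://nn1:8020/warehouse");
        conf.set("fs.viewfs.mounttable.cluster.linkFallback",
            "hdfs://nn1:8020/");

        FileSystem viewFs =
            FileSystem.get(URI.create("viewfs://cluster/"), conf);

        // Previously this failed with a read-only mount table error because
        // /user is an internal dir; with a fallback configured, the
        // directory is now created under the fallback file system instead.
        boolean created = viewFs.mkdirs(new Path("/user/newdir"));
        System.out.println("created under fallback: " + created);
      }
    }
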

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 06052b8..56448cb 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1339,6 +1339,31 @@ public class ViewFileSystem extends FileSystem {
   dir.toString().substring(1))) {
 return true; // this is the stupid semantics of FileSystem
   }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leafChild = (InodeTree.SlashPath.equals(dir)) ?
+InodeTree.SlashPath.toString() :
+dir.getName();
+Path dirToCreate = new Path(parent, leafChild);
+
+try {
+  return linkedFallbackFs.mkdirs(dirToCreate, permission);
+} catch (IOException e) {
+  if (LOG.isDebugEnabled()) {
+StringBuilder msg =
+new StringBuilder("Failed to create ").append(dirToCreate)
+.append(" at fallback : ")
+.append(linkedFallbackFs.getUri());
+LOG.debug(msg.toString(), e);
+  }
+  return false;
+}
+  }
+
   throw readOnlyMountTable("mkdirs",  dir);
 }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index d18233a..c769003 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -1134,11 +1134,35 @@ public class ViewFs extends AbstractFileSystem {
 
 @Override
 public void mkdir(final Path dir, final FsPermission permission,
-final boolean createParent) throws AccessControlException,
-FileAlreadyExistsException {
+final boolean createParent) throws IOException {
   if (theInternalDir.isRoot() && dir == null) {
 throw new FileAlreadyExistsException("/ already exits");
   }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+AbstractFileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leafChild = (InodeTree.SlashPath.equals(dir)) ?
+InodeTree.SlashPath.toString() :
+dir.getName();
+Path dirToCreate = new Path(parent, leafChild);
+try {
+  // We are here because, the parent dir already exist in the mount
+  // table internal tree. So, let's create parent always in fallback.
+  linkedFallbackFs.mkdir(dirToCreate, permission, true);
+  return;
+} catch (IOException e) {
+  if (LOG.isDebugEnabled()) {
+StringBuilder msg = new StringBuilder("Failed to create {}")
+.append(" at fallback fs : {}");
+LOG.debug(msg.toString(), dirToCreate, linkedFallbackFs.getUri());
+  }
+  throw e;
+}
+  }
+
   throw readOnlyMountTable("mkdir", dir);
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/t

[hadoop] branch branch-3.3 updated: HDFS-15418. ViewFileSystemOverloadScheme should represent mount links as non symlinks. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 5f67c3f  HDFS-15418. ViewFileSystemOverloadScheme should represent 
mount links as non symlinks. Contributed by Uma Maheswara Rao G.
5f67c3f is described below

commit 5f67c3f3ca49d718d8fc7c1914c3d2b77b3462f0
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jun 20 00:32:02 2020 -0700

HDFS-15418. ViewFileSystemOverloadScheme should represent mount links as 
non symlinks. Contributed by Uma Maheswara Rao G.

(cherry picked from commit b27810aa6015253866ccc0ccc7247ad7024c0730)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java |   8 ++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  71 +++
 .../fs/viewfs/ViewFileSystemOverloadScheme.java|  20 +++-
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  80 -
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 132 +
 .../hadoop/fs/viewfs/TestViewfsFileStatus.java |   4 +-
 .../src/site/markdown/ViewFsOverloadScheme.md  |  42 +++
 ...SystemOverloadSchemeHdfsFileSystemContract.java |   5 +
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |   9 ++
 9 files changed, 295 insertions(+), 76 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 0a5d4b4..f454f63 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -90,4 +90,12 @@ public interface Constants {
   String CONFIG_VIEWFS_ENABLE_INNER_CACHE = "fs.viewfs.enable.inner.cache";
 
   boolean CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT = true;
+
+  /**
+   * Enable ViewFileSystem to show mountlinks as symlinks.
+   */
+  String CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS =
+  "fs.viewfs.mount.links.as.symlinks";
+
+  boolean CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT = true;
 }
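
A short sketch of how a deployment might use the flag defined just above; the mount-table entry is hypothetical, and the flag defaults to true, which preserves the existing symlink behavior:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MountLinksAsNonSymlinks {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.viewfs.mounttable.cluster.link./data",
            "hdfs://nn1:8020/data"); // hypothetical target
        // Overload-scheme deployments typically flip this so plain clients
        // see mount points as regular directories, not symlinks.
        conf.setBoolean("fs.viewfs.mount.links.as.symlinks", false);

        FileSystem viewFs =
            FileSystem.get(URI.create("viewfs://cluster/"), conf);
        for (FileStatus st : viewFs.listStatus(new Path("/"))) {
          // With the flag off, isDirectory() and the permission/owner fields
          // come from the resolved target rather than a synthetic symlink.
          System.out.println(st.getPath() + " dir=" + st.isDirectory()
              + " symlink=" + st.isSymlink());
        }
      }
    }
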
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 895edc0..1ee06e0 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -20,6 +20,8 @@ package org.apache.hadoop.fs.viewfs;
 import static 
org.apache.hadoop.fs.impl.PathCapabilitiesSupport.validatePathCapabilityArgs;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.PERMISSION_555;
 
 import java.io.FileNotFoundException;
@@ -527,10 +529,18 @@ public class ViewFileSystem extends FileSystem {
* the target path FileStatus object. The target path will be available via
* getSymlink on that children's FileStatus object. Since it represents as
* symlink, isDirectory on that children's FileStatus will return false.
+   * This behavior can be changed by setting an advanced configuration
+   * fs.viewfs.mount.links.as.symlinks to false. In this case, mount points 
will
+   * be represented as non-symlinks and all the file/directory attributes like
+   * permissions, isDirectory etc will be assigned from it's resolved target
+   * directory/file.
*
* If you want to get the FileStatus of target path for that children, you 
may
* want to use GetFileStatus API with that children's symlink path. Please 
see
* {@link ViewFileSystem#getFileStatus(Path f)}
+   *
+   * Note: In ViewFileSystem, by default the mount links are represented as
+   * symlinks.
*/
   @Override
   public FileStatus[] listStatus(final Path f) throws AccessControlException,
@@ -1114,6 +1124,7 @@ public class ViewFileSystem extends FileSystem {
 final long creationTime; // of the the mount table
 final UserGroupInformation ugi; // the user/group of user who created 
mtable
 final URI myUri;
+private final boolean showMountLinksAsSymlinks;
 
 public InternalDirOfViewFs(final InodeTree.INodeDir dir,
 final long cTime, final UserGroupInformation ugi, URI uri,
@@ -1127,6 +1138,9 @@ public class ViewFileSystem extends FileSystem {
   theInternalDir = dir;
   creationTime = cTime;
 

[hadoop] branch branch-3.3 updated: HDFS-15427. Merged ListStatus with Fallback target filesystem and InternalDirViewFS. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 29a8ee4  HDFS-15427. Merged ListStatus with Fallback target filesystem 
and InternalDirViewFS. Contributed by Uma Maheswara Rao G.
29a8ee4 is described below

commit 29a8ee4be639a18648df34a64bee6b413d1dcaf7
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jun 23 01:42:25 2020 -0700

HDFS-15427. Merged ListStatus with Fallback target filesystem and 
InternalDirViewFS. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 7c02d1889bbeabc73c95a4c83f0cd204365ff410)
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |   4 +-
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  89 
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  94 +---
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 251 -
 4 files changed, 360 insertions(+), 78 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 50c839b..d1e5d3a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -374,7 +374,7 @@ abstract class InodeTree {
   throws UnsupportedFileSystemException, URISyntaxException, IOException;
 
   protected abstract T getTargetFileSystem(INodeDir dir)
-  throws URISyntaxException;
+  throws URISyntaxException, IOException;
 
   protected abstract T getTargetFileSystem(String settings, URI[] mergeFsURIs)
   throws UnsupportedFileSystemException, URISyntaxException, IOException;
@@ -393,7 +393,7 @@ abstract class InodeTree {
 return rootFallbackLink != null;
   }
 
-  private INodeLink getRootFallbackLink() {
+  protected INodeLink getRootFallbackLink() {
 Preconditions.checkState(root.isInternalDir());
 return rootFallbackLink;
   }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 1ee06e0..06052b8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -290,8 +290,9 @@ public class ViewFileSystem extends FileSystem {
 
 @Override
 protected FileSystem getTargetFileSystem(final INodeDir 
dir)
-  throws URISyntaxException {
-  return new InternalDirOfViewFs(dir, creationTime, ugi, myUri, 
config);
+throws URISyntaxException {
+  return new InternalDirOfViewFs(dir, creationTime, ugi, myUri, config,
+  this);
 }
 
 @Override
@@ -518,10 +519,10 @@ public class ViewFileSystem extends FileSystem {
   /**
* {@inheritDoc}
*
-   * Note: listStatus on root("/") considers listing from fallbackLink if
-   * available. If the same directory name is present in configured mount path
-   * as well as in fallback link, then only the configured mount path will be
-   * listed in the returned result.
+   * Note: listStatus considers listing from fallbackLink if available. If the
+   * same directory path is present in configured mount path as well as in
+   * fallback fs, then only the fallback path will be listed in the returned
+   * result except for link.
*
* If any of the the immediate children of the given path f is a 
symlink(mount
* link), the returned FileStatus object of that children would be 
represented
@@ -1125,11 +1126,13 @@ public class ViewFileSystem extends FileSystem {
 final UserGroupInformation ugi; // the user/group of user who created 
mtable
 final URI myUri;
 private final boolean showMountLinksAsSymlinks;
+private InodeTree fsState;
 
 public InternalDirOfViewFs(final InodeTree.INodeDir dir,
 final long cTime, final UserGroupInformation ugi, URI uri,
-Configuration config) throws URISyntaxException {
+Configuration config, InodeTree fsState) throws URISyntaxException {
   myUri = uri;
+  this.fsState = fsState;
   try {
 initialize(myUri, config);
   } catch (IOException e) {
@@ -1225,7 +1228,8 @@ public class ViewFileSystem extends FileSystem {
 FileNotFoundException, IOException {
   checkPathIsSlash(f);
   FileStatus[] fallbackStatuses = listStatusForFallbackLink();
-  FileStatus[] result = new 
FileStatus[theInternalDir.getChildren().size()];
+  Set linkStatuses = new HashSet<>();
+  Set internalDirStatus

[hadoop] branch branch-3.3 updated: HADOOP-17060. Clarify listStatus and getFileStatus behaviors inconsistent in the case of ViewFs implementation for isDirectory. Contributed by Uma Maheswara Rao G.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 3cddd0b  HADOOP-17060. Clarify listStatus and getFileStatus behaviors 
inconsistent in the case of ViewFs implementation for isDirectory. Contributed 
by Uma Maheswara Rao G.
3cddd0b is described below

commit 3cddd0be29afc3405b33e59e45c17e7564239b74
Author: Uma Maheswara Rao G 
AuthorDate: Wed Jun 10 15:00:02 2020 -0700

HADOOP-17060. Clarify listStatus and getFileStatus behaviors inconsistent 
in the case of ViewFs implementation for isDirectory. Contributed by Uma 
Maheswara Rao G.

(cherry picked from commit 93b121a9717bb4ef5240fda877ebb5275f6446b4)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 36 --
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 24 +++
 .../src/main/java/org/apache/hadoop/fs/Hdfs.java   | 22 +
 .../apache/hadoop/hdfs/DistributedFileSystem.java  | 25 ---
 4 files changed, 94 insertions(+), 13 deletions(-)
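
To make the documented contract in the diff below easier to follow, a small illustrative sketch (mount-table entries hypothetical): getFileStatus resolves the mount link to its target, while listStatus on the parent reports the link itself as a symlink by default:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LinkStatusSemantics {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.viewfs.mounttable.cluster.link./data",
            "hdfs://nn1:8020/data"); // hypothetical mount link

        FileSystem viewFs =
            FileSystem.get(URI.create("viewfs://cluster/"), conf);

        // getFileStatus resolves the link: isDirectory() reflects the target.
        FileStatus resolved = viewFs.getFileStatus(new Path("/data"));
        System.out.println("/data isDirectory=" + resolved.isDirectory());

        // listStatus("/") returns the link as a symlink entry by default, so
        // isDirectory() is false and the target is available via getSymlink().
        for (FileStatus child : viewFs.listStatus(new Path("/"))) {
          System.out.println(child.getPath()
              + " symlink=" + child.isSymlink());
        }
      }
    }
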

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 56d0fc5..895edc0 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -488,6 +488,14 @@ public class ViewFileSystem extends FileSystem {
 : new ViewFsFileStatus(orig, qualified);
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * If the given path is a symlink(mount link), the path will be resolved to a
+   * target path and it will get the resolved path's FileStatus object. It will
+   * not be represented as a symlink and isDirectory API returns true if the
+   * resolved path is a directory, false otherwise.
+   */
   @Override
   public FileStatus getFileStatus(final Path f) throws AccessControlException,
   FileNotFoundException, IOException {
@@ -505,6 +513,25 @@ public class ViewFileSystem extends FileSystem {
 res.targetFileSystem.access(res.remainingPath, mode);
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * Note: listStatus on root("/") considers listing from fallbackLink if
+   * available. If the same directory name is present in configured mount path
+   * as well as in fallback link, then only the configured mount path will be
+   * listed in the returned result.
+   *
+   * If any of the the immediate children of the given path f is a 
symlink(mount
+   * link), the returned FileStatus object of that children would be 
represented
+   * as a symlink. It will not be resolved to the target path and will not get
+   * the target path FileStatus object. The target path will be available via
+   * getSymlink on that children's FileStatus object. Since it represents as
+   * symlink, isDirectory on that children's FileStatus will return false.
+   *
+   * If you want to get the FileStatus of target path for that children, you 
may
+   * want to use GetFileStatus API with that children's symlink path. Please 
see
+   * {@link ViewFileSystem#getFileStatus(Path f)}
+   */
   @Override
   public FileStatus[] listStatus(final Path f) throws AccessControlException,
   FileNotFoundException, IOException {
@@ -1174,20 +1201,11 @@ public class ViewFileSystem extends FileSystem {
   checkPathIsSlash(f);
   return new FileStatus(0, true, 0, 0, creationTime, creationTime,
   PERMISSION_555, ugi.getShortUserName(), ugi.getPrimaryGroupName(),
-
   new Path(theInternalDir.fullPath).makeQualified(
   myUri, ROOT_PATH));
 }
 
 
-/**
- * {@inheritDoc}
- *
- * Note: listStatus on root("/") considers listing from fallbackLink if
- * available. If the same directory name is present in configured mount
- * path as well as in fallback link, then only the configured mount path
- * will be listed in the returned result.
- */
 @Override
 public FileStatus[] listStatus(Path f) throws AccessControlException,
 FileNotFoundException, IOException {
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index df10dce..4578a4c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -351,6 +351,14 @@ public class ViewFs extends AbstractFileSystem {
 return res.targetFileSystem.getFileChecksum(res.remainingPath);
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * If the giv

[hadoop] branch branch-3.3 updated: HADOOP-17029. Return correct permission and owner for listing on internal directories in ViewFs. Contributed by Abhishek Das.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new c3bef49  HADOOP-17029. Return correct permission and owner for listing 
on internal directories in ViewFs. Contributed by Abhishek Das.
c3bef49 is described below

commit c3bef4906c34d7a97493f2bbdd3dc35e08324520
Author: Abhishek Das 
AuthorDate: Fri Jun 5 14:56:51 2020 -0700

HADOOP-17029. Return correct permission and owner for listing on internal 
directories in ViewFs. Contributed by Abhishek Das.

(cherry picked from commit e7dd02768b658b2a1f216fbedc65938d9b6ca6e9)
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  27 +++--
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  41 +--
 .../hadoop/fs/viewfs/TestViewfsFileStatus.java | 118 -
 3 files changed, 146 insertions(+), 40 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 2fde078..ddb3f2b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1200,13 +1200,26 @@ public class ViewFileSystem extends FileSystem {
 INode inode = iEntry.getValue();
 if (inode.isLink()) {
   INodeLink link = (INodeLink) inode;
-
-  result[i++] = new FileStatus(0, false, 0, 0,
-creationTime, creationTime, PERMISSION_555,
-ugi.getShortUserName(), ugi.getPrimaryGroupName(),
-link.getTargetLink(),
-new Path(inode.fullPath).makeQualified(
-myUri, null));
+  try {
+String linkedPath = link.getTargetFileSystem().getUri().getPath();
+FileStatus status =
+((ChRootedFileSystem)link.getTargetFileSystem())
+.getMyFs().getFileStatus(new Path(linkedPath));
+result[i++] = new FileStatus(status.getLen(), false,
+  status.getReplication(), status.getBlockSize(),
+  status.getModificationTime(), status.getAccessTime(),
+  status.getPermission(), status.getOwner(), status.getGroup(),
+  link.getTargetLink(),
+  new Path(inode.fullPath).makeQualified(
+  myUri, null));
+  } catch (FileNotFoundException ex) {
+result[i++] = new FileStatus(0, false, 0, 0,
+  creationTime, creationTime, PERMISSION_555,
+  ugi.getShortUserName(), ugi.getPrimaryGroupName(),
+  link.getTargetLink(),
+  new Path(inode.fullPath).makeQualified(
+  myUri, null));
+  }
 } else {
   result[i++] = new FileStatus(0, true, 0, 0,
 creationTime, creationTime, PERMISSION_555,
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index 607bdb8..df10dce 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -917,11 +917,25 @@ public class ViewFs extends AbstractFileSystem {
   if (inode.isLink()) {
 INodeLink inodelink = 
   (INodeLink) inode;
-result = new FileStatus(0, false, 0, 0, creationTime, creationTime,
+try {
+  String linkedPath = inodelink.getTargetFileSystem()
+  .getUri().getPath();
+  FileStatus status = ((ChRootedFs)inodelink.getTargetFileSystem())
+  .getMyFs().getFileStatus(new Path(linkedPath));
+  result = new FileStatus(status.getLen(), false,
+status.getReplication(), status.getBlockSize(),
+status.getModificationTime(), status.getAccessTime(),
+status.getPermission(), status.getOwner(), status.getGroup(),
+inodelink.getTargetLink(),
+new Path(inode.fullPath).makeQualified(
+myUri, null));
+} catch (FileNotFoundException ex) {
+  result = new FileStatus(0, false, 0, 0, creationTime, creationTime,
 PERMISSION_555, ugi.getShortUserName(), ugi.getPrimaryGroupName(),
 inodelink.getTargetLink(),
 new Path(inode.fullPath).makeQualified(
 myUri, null));
+}
   } else {
 result = new FileStatus(0, true, 0, 0, creationTime, creationTime,
   PERMISSION_555, ugi.getShortUserName(), ugi.getPrimaryGroupName(),
@@ -976,12 +990,25 @@ public class ViewFs extends

[hadoop] branch branch-3.3 updated: HDFS-15396. Fix TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir. Contributed by Ayush Saxena.

2020-06-27 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 7b29019  HDFS-15396. Fix 
TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir. 
Contributed by Ayush Saxena.
7b29019 is described below

commit 7b29019eeae2622957c086c7240f57681088d622
Author: Ayush Saxena 
AuthorDate: Mon Jun 8 01:59:10 2020 +0530

HDFS-15396. Fix 
TestViewFileSystemOverloadSchemeHdfsFileSystemContract#testListStatusRootDir. 
Contributed by Ayush Saxena.

(cherry picked from commit a8610c15c498531bf3c011f1b0ace8ef61f2)
---
 .../src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java  | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index ddb3f2b..56d0fc5 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1202,6 +1202,9 @@ public class ViewFileSystem extends FileSystem {
   INodeLink link = (INodeLink) inode;
   try {
 String linkedPath = link.getTargetFileSystem().getUri().getPath();
+if("".equals(linkedPath)) {
+  linkedPath = "/";
+}
 FileStatus status =
 ((ChRootedFileSystem)link.getTargetFileSystem())
 .getMyFs().getFileStatus(new Path(linkedPath));


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: HDFS-15429. mkdirs should work when parent dir is an internalDir and fallback configured. Contributed by Uma Maheswara Rao G.

2020-06-26 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d5e1bb6  HDFS-15429. mkdirs should work when parent dir is an 
internalDir and fallback configured. Contributed by Uma Maheswara Rao G.
d5e1bb6 is described below

commit d5e1bb6155496cf9d82e121dd1b65d0072312197
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 26 01:29:38 2020 -0700

HDFS-15429. mkdirs should work when parent dir is an internalDir and 
fallback configured. Contributed by Uma Maheswara Rao G.
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  25 ++
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  28 +-
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 229 +---
 .../hadoop/fs/viewfs/TestViewFsLinkFallback.java   | 297 +
 4 files changed, 542 insertions(+), 37 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 06052b8..56448cb 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1339,6 +1339,31 @@ public class ViewFileSystem extends FileSystem {
   dir.toString().substring(1))) {
 return true; // this is the stupid semantics of FileSystem
   }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leafChild = (InodeTree.SlashPath.equals(dir)) ?
+InodeTree.SlashPath.toString() :
+dir.getName();
+Path dirToCreate = new Path(parent, leafChild);
+
+try {
+  return linkedFallbackFs.mkdirs(dirToCreate, permission);
+} catch (IOException e) {
+  if (LOG.isDebugEnabled()) {
+StringBuilder msg =
+new StringBuilder("Failed to create ").append(dirToCreate)
+.append(" at fallback : ")
+.append(linkedFallbackFs.getUri());
+LOG.debug(msg.toString(), e);
+  }
+  return false;
+}
+  }
+
   throw readOnlyMountTable("mkdirs",  dir);
 }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
index d18233a..c769003 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
@@ -1134,11 +1134,35 @@ public class ViewFs extends AbstractFileSystem {
 
 @Override
 public void mkdir(final Path dir, final FsPermission permission,
-final boolean createParent) throws AccessControlException,
-FileAlreadyExistsException {
+final boolean createParent) throws IOException {
   if (theInternalDir.isRoot() && dir == null) {
 throw new FileAlreadyExistsException("/ already exits");
   }
+
+  if (this.fsState.getRootFallbackLink() != null) {
+AbstractFileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path parent = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+String leafChild = (InodeTree.SlashPath.equals(dir)) ?
+InodeTree.SlashPath.toString() :
+dir.getName();
+Path dirToCreate = new Path(parent, leafChild);
+try {
+  // We are here because, the parent dir already exist in the mount
+  // table internal tree. So, let's create parent always in fallback.
+  linkedFallbackFs.mkdir(dirToCreate, permission, true);
+  return;
+} catch (IOException e) {
+  if (LOG.isDebugEnabled()) {
+StringBuilder msg = new StringBuilder("Failed to create {}")
+.append(" at fallback fs : {}");
+LOG.debug(msg.toString(), dirToCreate, linkedFallbackFs.getUri());
+  }
+  throw e;
+}
+  }
+
   throw readOnlyMountTable("mkdir", dir);
 }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
index f7f5453..bec261c

[hadoop] branch trunk updated: HDFS-15427. Merged ListStatus with Fallback target filesystem and InternalDirViewFS. Contributed by Uma Maheswara Rao G.

2020-06-23 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7c02d18  HDFS-15427. Merged ListStatus with Fallback target filesystem 
and InternalDirViewFS. Contributed by Uma Maheswara Rao G.
7c02d18 is described below

commit 7c02d1889bbeabc73c95a4c83f0cd204365ff410
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jun 23 01:42:25 2020 -0700

HDFS-15427. Merged ListStatus with Fallback target filesystem and 
InternalDirViewFS. Contributed by Uma Maheswara Rao G.
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |   4 +-
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  89 
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  94 +---
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 251 -
 4 files changed, 360 insertions(+), 78 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 50c839b..d1e5d3a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -374,7 +374,7 @@ abstract class InodeTree {
   throws UnsupportedFileSystemException, URISyntaxException, IOException;
 
   protected abstract T getTargetFileSystem(INodeDir dir)
-  throws URISyntaxException;
+  throws URISyntaxException, IOException;
 
   protected abstract T getTargetFileSystem(String settings, URI[] mergeFsURIs)
   throws UnsupportedFileSystemException, URISyntaxException, IOException;
@@ -393,7 +393,7 @@ abstract class InodeTree {
 return rootFallbackLink != null;
   }
 
-  private INodeLink getRootFallbackLink() {
+  protected INodeLink getRootFallbackLink() {
 Preconditions.checkState(root.isInternalDir());
 return rootFallbackLink;
   }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 1ee06e0..06052b8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -290,8 +290,9 @@ public class ViewFileSystem extends FileSystem {
 
 @Override
 protected FileSystem getTargetFileSystem(final INodeDir 
dir)
-  throws URISyntaxException {
-  return new InternalDirOfViewFs(dir, creationTime, ugi, myUri, 
config);
+throws URISyntaxException {
+  return new InternalDirOfViewFs(dir, creationTime, ugi, myUri, config,
+  this);
 }
 
 @Override
@@ -518,10 +519,10 @@ public class ViewFileSystem extends FileSystem {
   /**
* {@inheritDoc}
*
-   * Note: listStatus on root("/") considers listing from fallbackLink if
-   * available. If the same directory name is present in configured mount path
-   * as well as in fallback link, then only the configured mount path will be
-   * listed in the returned result.
+   * Note: listStatus considers listing from fallbackLink if available. If the
+   * same directory path is present in configured mount path as well as in
+   * fallback fs, then only the fallback path will be listed in the returned
+   * result except for link.
*
* If any of the the immediate children of the given path f is a 
symlink(mount
* link), the returned FileStatus object of that children would be 
represented
@@ -1125,11 +1126,13 @@ public class ViewFileSystem extends FileSystem {
 final UserGroupInformation ugi; // the user/group of user who created 
mtable
 final URI myUri;
 private final boolean showMountLinksAsSymlinks;
+private InodeTree fsState;
 
 public InternalDirOfViewFs(final InodeTree.INodeDir dir,
 final long cTime, final UserGroupInformation ugi, URI uri,
-Configuration config) throws URISyntaxException {
+Configuration config, InodeTree fsState) throws URISyntaxException {
   myUri = uri;
+  this.fsState = fsState;
   try {
 initialize(myUri, config);
   } catch (IOException e) {
@@ -1225,7 +1228,8 @@ public class ViewFileSystem extends FileSystem {
 FileNotFoundException, IOException {
   checkPathIsSlash(f);
   FileStatus[] fallbackStatuses = listStatusForFallbackLink();
-  FileStatus[] result = new 
FileStatus[theInternalDir.getChildren().size()];
+  Set linkStatuses = new HashSet<>();
+  Set internalDirStatuses = new HashSet<>();
   int i = 0;
   for (Entry> iEntry :
   t

[hadoop] branch trunk updated: HDFS-15418. ViewFileSystemOverloadScheme should represent mount links as non symlinks. Contributed by Uma Maheswara Rao G.

2020-06-20 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b27810a  HDFS-15418. ViewFileSystemOverloadScheme should represent 
mount links as non symlinks. Contributed by Uma Maheswara Rao G.
b27810a is described below

commit b27810aa6015253866ccc0ccc7247ad7024c0730
Author: Uma Maheswara Rao G 
AuthorDate: Sat Jun 20 00:32:02 2020 -0700

HDFS-15418. ViewFileSystemOverloadScheme should represent mount links as 
non symlinks. Contributed by Uma Maheswara Rao G.
---
 .../org/apache/hadoop/fs/viewfs/Constants.java |   8 ++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  71 +++
 .../fs/viewfs/ViewFileSystemOverloadScheme.java|  20 +++-
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  80 -
 .../viewfs/TestViewFsOverloadSchemeListStatus.java | 132 +
 .../hadoop/fs/viewfs/TestViewfsFileStatus.java |   4 +-
 .../src/site/markdown/ViewFsOverloadScheme.md  |  42 +++
 ...SystemOverloadSchemeHdfsFileSystemContract.java |   5 +
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java |   9 ++
 9 files changed, 295 insertions(+), 76 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 0a5d4b4..f454f63 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -90,4 +90,12 @@ public interface Constants {
   String CONFIG_VIEWFS_ENABLE_INNER_CACHE = "fs.viewfs.enable.inner.cache";
 
   boolean CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT = true;
+
+  /**
+   * Enable ViewFileSystem to show mountlinks as symlinks.
+   */
+  String CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS =
+  "fs.viewfs.mount.links.as.symlinks";
+
+  boolean CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT = true;
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 895edc0..1ee06e0 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -20,6 +20,8 @@ package org.apache.hadoop.fs.viewfs;
 import static 
org.apache.hadoop.fs.impl.PathCapabilitiesSupport.validatePathCapabilityArgs;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE;
 import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE_DEFAULT;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS;
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS_DEFAULT;
 import static org.apache.hadoop.fs.viewfs.Constants.PERMISSION_555;
 
 import java.io.FileNotFoundException;
@@ -527,10 +529,18 @@ public class ViewFileSystem extends FileSystem {
* the target path FileStatus object. The target path will be available via
* getSymlink on that children's FileStatus object. Since it represents as
* symlink, isDirectory on that children's FileStatus will return false.
+   * This behavior can be changed by setting an advanced configuration
+   * fs.viewfs.mount.links.as.symlinks to false. In this case, mount points 
will
+   * be represented as non-symlinks and all the file/directory attributes like
+   * permissions, isDirectory etc will be assigned from it's resolved target
+   * directory/file.
*
* If you want to get the FileStatus of target path for that children, you 
may
* want to use GetFileStatus API with that children's symlink path. Please 
see
* {@link ViewFileSystem#getFileStatus(Path f)}
+   *
+   * Note: In ViewFileSystem, by default the mount links are represented as
+   * symlinks.
*/
   @Override
   public FileStatus[] listStatus(final Path f) throws AccessControlException,
@@ -1114,6 +1124,7 @@ public class ViewFileSystem extends FileSystem {
 final long creationTime; // of the the mount table
 final UserGroupInformation ugi; // the user/group of user who created 
mtable
 final URI myUri;
+private final boolean showMountLinksAsSymlinks;
 
 public InternalDirOfViewFs(final InodeTree.INodeDir dir,
 final long cTime, final UserGroupInformation ugi, URI uri,
@@ -1127,6 +1138,9 @@ public class ViewFileSystem extends FileSystem {
   theInternalDir = dir;
   creationTime = cTime;
   this.ugi = ugi;
+  showMountLinksAsSymlinks = config
+ 

[hadoop] branch branch-3.3 updated: HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath. Contributed by Uma Maheswara Rao G.

2020-06-16 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 120ee79  HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme 
in processPath. Contributed by Uma Maheswara Rao G.
120ee79 is described below

commit 120ee793fc4bcbf9d1945d5e38e3ad5b2b290a0e
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 12 14:32:19 2020 -0700

HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 785b1def959fab6b8b766410bcd240feee13)
---
 .../java/org/apache/hadoop/fs/shell/FsUsage.java   |   3 +-
 .../hadoop/fs/viewfs/ViewFileSystemUtil.java   |  14 +-
 ...ViewFileSystemOverloadSchemeWithFSCommands.java | 173 +
 3 files changed, 188 insertions(+), 2 deletions(-)
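
A hedged sketch of the check that df-style commands can now perform; the hdfs://mycluster URI assumes an overload-scheme setup and is illustrative:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.viewfs.ViewFileSystemUtil;

    public class DfPathCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster/"), conf);

        // With this change, both plain ViewFileSystem and the
        // overload-scheme variant take the per-mount-point reporting path
        // in FsUsage#DF.
        if (ViewFileSystemUtil.isViewFileSystem(fs)
            || ViewFileSystemUtil.isViewFileSystemOverloadScheme(fs)) {
          System.out.println("df will report per mount point");
        } else {
          System.out.println("df will report a single file system");
        }
      }
    }
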

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
index 6596527..64aade3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
@@ -128,7 +128,8 @@ class FsUsage extends FsCommand {
 
 @Override
 protected void processPath(PathData item) throws IOException {
-  if (ViewFileSystemUtil.isViewFileSystem(item.fs)) {
+  if (ViewFileSystemUtil.isViewFileSystem(item.fs)
+  || ViewFileSystemUtil.isViewFileSystemOverloadScheme(item.fs)) {
 ViewFileSystem viewFileSystem = (ViewFileSystem) item.fs;
 Map  fsStatusMap =
 ViewFileSystemUtil.getStatus(viewFileSystem, item.path);
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
index c8a1d78..f486a10 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
@@ -52,6 +52,17 @@ public final class ViewFileSystemUtil {
   }
 
   /**
+   * Check if the FileSystem is a ViewFileSystemOverloadScheme.
+   *
+   * @param fileSystem
+   * @return true if the fileSystem is ViewFileSystemOverloadScheme
+   */
+  public static boolean isViewFileSystemOverloadScheme(
+  final FileSystem fileSystem) {
+return fileSystem instanceof ViewFileSystemOverloadScheme;
+  }
+
+  /**
* Get FsStatus for all ViewFsMountPoints matching path for the given
* ViewFileSystem.
*
@@ -93,7 +104,8 @@ public final class ViewFileSystemUtil {
*/
   public static Map getStatus(
   FileSystem fileSystem, Path path) throws IOException {
-if (!isViewFileSystem(fileSystem)) {
+if (!(isViewFileSystem(fileSystem)
+|| isViewFileSystemOverloadScheme(fileSystem))) {
   throw new UnsupportedFileSystemException("FileSystem '"
   + fileSystem.getUri() + "'is not a ViewFileSystem.");
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
new file mode 100644
index 000..a974377
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
@@ -0,0 +1,173 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.tools;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.List;
+import java.util.Scanner;
+
+imp

[hadoop] branch branch-3.3 updated: HDFS-15389. DFSAdmin should close filesystem and dfsadmin -setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by Ayush Saxena

2020-06-16 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new bee2846  HDFS-15389. DFSAdmin should close filesystem and dfsadmin 
-setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by 
Ayush Saxena
bee2846 is described below

commit bee2846bee4ae676bdc14585f8a3927a9dd7df37
Author: Ayush Saxena 
AuthorDate: Sat Jun 6 10:49:38 2020 +0530

HDFS-15389. DFSAdmin should close filesystem and dfsadmin 
-setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by 
Ayush Saxena

(cherry picked from commit cc671b16f7b0b7c1ed7b41b96171653dc43cf670)
---
 .../java/org/apache/hadoop/hdfs/tools/DFSAdmin.java  | 13 +++--
 ...TestViewFileSystemOverloadSchemeWithDFSAdmin.java | 20 
 2 files changed, 23 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
index 6ab16c3..ec5fa0a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
@@ -479,9 +479,9 @@ public class DFSAdmin extends FsShell {
   public DFSAdmin(Configuration conf) {
 super(conf);
   }
-  
+
   protected DistributedFileSystem getDFS() throws IOException {
-return AdminHelper.getDFS(getConf());
+return AdminHelper.checkAndGetDFS(getFS(), getConf());
   }
   
   /**
@@ -1036,14 +1036,7 @@ public class DFSAdmin extends FsShell {
   System.err.println("Bandwidth should be a non-negative integer");
   return exitCode;
 }
-
-FileSystem fs = getFS();
-if (!(fs instanceof DistributedFileSystem)) {
-  System.err.println("FileSystem is " + fs.getUri());
-  return exitCode;
-}
-
-DistributedFileSystem dfs = (DistributedFileSystem) fs;
+DistributedFileSystem dfs = getDFS();
 try{
   dfs.setBalancerBandwidth(bandwidth);
   System.out.println("Balancer bandwidth is set to " + bandwidth);
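
As a reading aid only, here is a rough sketch of the kind of resolution a helper such as AdminHelper.checkAndGetDFS performs. It is not the code from this commit, and the overload-scheme branch shown is an assumption about the approach.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme;
import org.apache.hadoop.hdfs.DistributedFileSystem;

final class DfsResolutionSketch {

  private DfsResolutionSketch() {
  }

  /** Resolve the shell's FileSystem to a DistributedFileSystem, if possible. */
  static DistributedFileSystem resolveDfs(FileSystem fs, Configuration conf)
      throws IOException {
    if (fs instanceof DistributedFileSystem) {
      return (DistributedFileSystem) fs;
    }
    if (fs instanceof ViewFileSystemOverloadScheme) {
      // Ask the overload-scheme view for the raw child file system backing
      // the default path, then verify it really is HDFS.
      FileSystem raw = ((ViewFileSystemOverloadScheme) fs)
          .getRawFileSystem(new Path("/"), conf);
      if (raw instanceof DistributedFileSystem) {
        return (DistributedFileSystem) raw;
      }
    }
    throw new IOException(
        "FileSystem " + fs.getUri() + " is not a distributed file system");
  }
}
```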
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
index 1961dc2..a9475dd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
@@ -263,4 +263,24 @@ public class TestViewFileSystemOverloadSchemeWithDFSAdmin {
 assertOutMsg("Disallowing snapshot on / succeeded", 1);
 assertEquals(0, ret);
   }
+
+  /**
+   * Tests setBalancerBandwidth with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testSetBalancerBandwidth() throws Exception {
+final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+addMountLinks(defaultFSURI.getAuthority(),
+new String[] {HDFS_USER_FOLDER, LOCAL_FOLDER },
+new String[] {hdfsTargetPath.toUri().toString(),
+localTargetDir.toURI().toString() },
+conf);
+final DFSAdmin dfsAdmin = new DFSAdmin(conf);
+redirectStream();
+int ret = ToolRunner.run(dfsAdmin,
+new String[] {"-fs", defaultFSURI.toString(), "-setBalancerBandwidth",
+"1000"});
+assertOutMsg("Balancer bandwidth is set to 1000", 0);
+assertEquals(0, ret);
+  }
 }
\ No newline at end of file


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme. Contributed by Uma Maheswara Rao G.

2020-06-16 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 0b5e202  HDFS-15321. Make DFSAdmin tool to work with 
ViewFileSystemOverloadScheme. Contributed by Uma Maheswara Rao G.
0b5e202 is described below

commit 0b5e202614f0bc20a0db6656f924fa4d2741d00c
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jun 2 11:09:26 2020 -0700

HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit ed83c865dd0b4e92f3f89f79543acc23792bb69c)
---
 .../fs/viewfs/ViewFileSystemOverloadScheme.java|  29 +++
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |   2 +-
 .../org/apache/hadoop/hdfs/tools/AdminHelper.java  |  25 +-
 .../org/apache/hadoop/hdfs/tools/DFSAdmin.java |  13 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 266 +
 5 files changed, 317 insertions(+), 18 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index f5952d5..36f9cd1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.fs.viewfs;
 
+import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
@@ -27,6 +28,7 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 
 /**
@@ -227,4 +229,31 @@ public class ViewFileSystemOverloadScheme extends 
ViewFileSystem {
 
   }
 
+  /**
+   * This is an admin only API to give access to its child raw file system, if
+   * the path is link. If the given path is an internal directory(path is from
+   * mount paths tree), it will initialize the file system of given path uri
+   * directly. If path cannot be resolved to any internal directory or link, it
+   * will throw NotInMountpointException. Please note, this API will not return
+   * chrooted file system. Instead, this API will get actual raw file system
+   * instances.
+   *
+   * @param path - fs uri path
+   * @param conf - configuration
+   * @throws IOException
+   */
+  public FileSystem getRawFileSystem(Path path, Configuration conf)
+  throws IOException {
+InodeTree.ResolveResult<FileSystem> res;
+try {
+  res = fsState.resolve(getUriPath(path), true);
+  return res.isInternalDir() ? fsGetter().get(path.toUri(), conf)
+  : ((ChRootedFileSystem) res.targetFileSystem).getMyFs();
+} catch (FileNotFoundException e) {
+  // No link configured with passed path.
+  throw new NotInMountpointException(path,
+  "No link found for the given path.");
+}
+  }
+
 }
\ No newline at end of file
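
To make the intent of the new API concrete, a small usage sketch (not taken from the commit) follows; the /user mount path and the surrounding configuration are assumptions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RawFileSystemSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    if (fs instanceof ViewFileSystemOverloadScheme) {
      // Resolve the configured mount link /user to its actual (raw) child
      // file system; as described above, the chrooted wrapper is bypassed.
      FileSystem raw = ((ViewFileSystemOverloadScheme) fs)
          .getRawFileSystem(new Path("/user"), conf);
      System.out.println("Raw child file system for /user: " + raw.getUri());
      if (raw instanceof DistributedFileSystem) {
        // Admin-only operations can now be issued against the real HDFS.
        System.out.println("/user is backed by HDFS");
      }
    }
  }
}
```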
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
index f051c9c..efced73 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
@@ -192,7 +192,7 @@ public class ViewFsTestSetup {
* Adds the given mount links to the configuration. Mount link mappings are
* in sources, targets at their respective index locations.
*/
-  static void addMountLinksToConf(String mountTable, String[] sources,
+  public static void addMountLinksToConf(String mountTable, String[] sources,
   String[] targets, Configuration config) throws URISyntaxException {
 for (int i = 0; i < sources.length; i++) {
   String src = sources[i];
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
index 9cb646b..27cdf70 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
@@ -1,4 +1,5 @@
 /**
+
  * Licensed to the Apache Software Foundation (ASF) under one
  * or mor

[hadoop] branch branch-3.3 updated: HDFS-15330. Document the ViewFSOverloadScheme details in ViewFS guide. Contributed by Uma Maheswara Rao G.

2020-06-16 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 4185804  HDFS-15330. Document the ViewFSOverloadScheme details in 
ViewFS guide. Contributed by Uma Maheswara Rao G.
4185804 is described below

commit 418580446b65be3a0674762e76fc2cb9a1e5629a
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 5 10:58:21 2020 -0700

HDFS-15330. Document the ViewFSOverloadScheme details in ViewFS guide. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 76fa0222f0d2e2d92b4a1eedba8b3e38002e8c23)
---
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |  40 -
 .../hadoop-hdfs/src/site/markdown/ViewFs.md|   6 +
 .../src/site/markdown/ViewFsOverloadScheme.md  | 163 +
 .../site/resources/images/ViewFSOverloadScheme.png | Bin 0 -> 190004 bytes
 hadoop-project/src/site/site.xml   |   1 +
 5 files changed, 209 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index bc5ac30..d199c06 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -693,4 +693,42 @@ Usage: `hdfs debug recoverLease -path <path> [-retries <num-retries>]`
 | [`-path` *path*] | HDFS path for which to recover the lease. |
 | [`-retries` *num-retries*] | Number of times the client will retry calling 
recoverLease. The default number of retries is 1. |
 
-Recover the lease on the specified path. The path must reside on an HDFS 
filesystem. The default number of retries is 1.
+Recover the lease on the specified path. The path must reside on an HDFS file 
system. The default number of retries is 1.
+
+dfsadmin with ViewFsOverloadScheme
+--
+
+Usage: `hdfs dfsadmin -fs <child fs mount link URI> <dfsadmin command options>`
+
+| COMMAND\_OPTION | Description |
+|: |: |
+| `-fs` *child fs mount link URI* | It's a logical mount link path to a child file system in the ViewFS world. This uri is typically formed as the src mount link prefixed with fs.defaultFS. Please note, this is not an actual child file system uri; instead it's a logical mount link uri pointing to the actual child file system. |
+
+Example command usage:
+   `hdfs dfsadmin -fs hdfs://nn1 -safemode enter`
+
+In ViewFsOverloadScheme, we may have multiple child file systems as mount point mappings, as shown in the [ViewFsOverloadScheme Guide](./ViewFsOverloadScheme.html). Here the -fs option is an optional generic parameter supported by dfsadmin. When users want to execute commands on one of the child file systems, they need to pass that file system's mount mapping link uri to the -fs option. Let's take an example mount link configuration and dfsadmin command below.
+
+Mount link:
+
+```xml
+<property>
+  <name>fs.defaultFS</name>
+  <value>hdfs://MyCluster1</value>
+</property>
+
+<property>
+  <name>fs.viewfs.mounttable.MyCluster1./user</name>
+  <value>hdfs://MyCluster2/user</value>
+  <!-- mount target: hdfs://MyCluster2/user
+       mount link path: /user
+       mount link uri: hdfs://MyCluster1/user
+       mount target uri for /user: hdfs://MyCluster2/user -->
+</property>
+```
+
+If a user wants to talk to `hdfs://MyCluster2/`, they can pass the -fs option (`-fs hdfs://MyCluster1/user`).
+Since /user was mapped to the cluster `hdfs://MyCluster2/user`, dfsadmin resolves the passed (`-fs hdfs://MyCluster1/user`) to the target fs (`hdfs://MyCluster2/user`).
+This way users can get access to all hdfs child file systems in ViewFsOverloadScheme.
+If no `-fs` option is provided, it will try to connect to the configured fs.defaultFS cluster, if a cluster is running with the fs.defaultFS uri.
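
The same resolution can be driven programmatically; a minimal sketch (mirroring how the DFSAdmin tests earlier in this digest invoke the tool) is shown below, with the hdfs://MyCluster1/user link URI being an assumed example.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.tools.DFSAdmin;
import org.apache.hadoop.util.ToolRunner;

public class DfsAdminMountLinkSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DFSAdmin dfsAdmin = new DFSAdmin(conf);
    // -fs takes the logical mount link URI; dfsadmin resolves it to the
    // target child file system (hdfs://MyCluster2/user in the example above).
    int ret = ToolRunner.run(dfsAdmin,
        new String[] {"-fs", "hdfs://MyCluster1/user", "-safemode", "get"});
    System.exit(ret);
  }
}
```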
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
index f851ef6..52ad49c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
@@ -361,6 +361,12 @@ resume its work, it's a good idea to provision some sort 
of cron job to purge su
 
 Delegation tokens for the cluster to which you are submitting the job 
(including all mounted volumes for that cluster’s mount table), and for input 
and output paths to your map-reduce job (including all volumes mounted via 
mount tables for the specified input and output paths) are all handled 
automatically. In addition, there is a way to add additional delegation tokens 
to the base cluster configuration for special circumstances.
 
+Don't want to change scheme or difficult to copy mount-table configurations to 
all clients?
+---
+
+Please refer to the [View File System Overload Scheme 
Guide](./ViewFsOverloadScheme.html)
+
+
 Appendix: A Mount Table Configuration E

[hadoop] branch branch-3.3 updated: HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's scheme and target uris schemes are same. Contributed by Uma Maheswara Rao G.

2020-06-16 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 8e71e85  HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's 
scheme and target uris schemes are same. Contributed by Uma Maheswara Rao G.
8e71e85 is described below

commit 8e71e85af70c17f2350f794f8bc2475eb1e3acea
Author: Uma Maheswara Rao G 
AuthorDate: Thu May 21 21:34:58 2020 -0700

HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's scheme and 
target uris schemes are same. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 4734c77b4b64b7c6432da4cc32881aba85f94ea1)
---
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java|  15 ++-
 .../java/org/apache/hadoop/fs/viewfs/FsGetter.java |  47 
 .../fs/viewfs/HCFSMountTableConfigLoader.java  |   3 +-
 .../org/apache/hadoop/fs/viewfs/NflyFSystem.java   |  29 -
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  24 +---
 .../hadoop/fs/viewfs/ViewFileSystemBaseTest.java   |   1 -
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |  28 -
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 121 +
 8 files changed, 230 insertions(+), 38 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
index 4c3dae9..6dd1f65 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
@@ -136,6 +136,17 @@ public class ConfigUtil {
   }
 
   /**
+   * Add nfly link to configuration for the given mount table.
+   */
+  public static void addLinkNfly(Configuration conf, String mountTableName,
+  String src, String settings, final String targets) {
+conf.set(
+getConfigViewFsPrefix(mountTableName) + "."
++ Constants.CONFIG_VIEWFS_LINK_NFLY + "." + settings + "." + src,
+targets);
+  }
+
+  /**
*
* @param conf
* @param mountTableName
@@ -149,9 +160,7 @@ public class ConfigUtil {
 settings = settings == null
 ? "minReplication=2,repairOnRead=true"
 : settings;
-
-conf.set(getConfigViewFsPrefix(mountTableName) + "." +
-Constants.CONFIG_VIEWFS_LINK_NFLY + "." + settings + "." + src,
+addLinkNfly(conf, mountTableName, src, settings,
 StringUtils.uriToString(targets));
   }
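
A small usage sketch of the new addLinkNfly overload follows (not part of the commit; the mount table name, source path and target URIs are made-up examples).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.viewfs.ConfigUtil;

public class NflyLinkConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Register an nfly (n-way replicated) link for /data in mount table
    // "MyCluster1". The settings string carries the nfly options and the
    // last argument is a comma-separated list of replica target URIs.
    ConfigUtil.addLinkNfly(conf, "MyCluster1", "/data",
        "minReplication=2,repairOnRead=true",
        "hdfs://nn1/data,hdfs://nn2/data");
  }
}
```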
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
new file mode 100644
index 000..071af11
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+/**
+ * File system instance getter.
+ */
+@Private
+class FsGetter {
+
+  /**
+   * Gets new file system instance of given uri.
+   */
+  public FileSystem getNewInstance(URI uri, Configuration conf)
+  throws IOException {
+return FileSystem.newInstance(uri, conf);
+  }
+
+  /**
+   * Gets file system instance of given uri.
+   */
+  public FileSystem get(URI uri, Configuration conf) throws IOException {
+return FileSystem.get(uri, conf);
+  }
+}
\ No newline at end of file
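
FsGetter is a small seam that lets view file system code choose between cached and non-cached child instances. A purely hypothetical subclass (it would have to live in the same package, since the class is package-private) could look like this:

```java
package org.apache.hadoop.fs.viewfs;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

/**
 * Hypothetical FsGetter that always hands out non-cached instances, e.g. so
 * a view file system can close its child file systems independently.
 */
class NonCachedFsGetter extends FsGetter {
  @Override
  public FileSystem get(URI uri, Configuration conf) throws IOException {
    // Delegate to the non-caching factory instead of FileSystem.get().
    return getNewInstance(uri, conf);
  }
}
```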
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
index c7e5aab..3968e36 1

[hadoop] branch branch-3.3 updated: HADOOP-17024. ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root). Contributed by Abhishek Das.

2020-06-16 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 5b248de  HADOOP-17024. ListStatus on ViewFS root (ls "/") should list 
the linkFallBack root (configured target root). Contributed by Abhishek Das.
5b248de is described below

commit 5b248de42d2ae42710531a1514a21d60a1fca4b2
Author: Abhishek Das 
AuthorDate: Mon May 18 22:27:12 2020 -0700

HADOOP-17024. ListStatus on ViewFS root (ls "/") should list the 
linkFallBack root (configured target root). Contributed by Abhishek Das.

(cherry picked from commit ce4ec7445345eb94c6741d416814a4eac319f0a6)
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 49 ++-
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 51 ++-
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 98 ++
 4 files changed, 209 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 6992343..50c839b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -123,6 +123,7 @@ abstract class InodeTree<T> {
private final Map<String, INode<T>> children = new HashMap<>();
 private T internalDirFs =  null; //filesystem of this internal directory
 private boolean isRoot = false;
+private INodeLink<T> fallbackLink = null;
 
 INodeDir(final String pathToNode, final UserGroupInformation aUgi) {
   super(pathToNode, aUgi);
@@ -149,6 +150,17 @@ abstract class InodeTree {
   return isRoot;
 }
 
+INodeLink<T> getFallbackLink() {
+  return fallbackLink;
+}
+
+void addFallbackLink(INodeLink<T> link) throws IOException {
+  if (!isRoot) {
+throw new IOException("Fallback link can only be added for root");
+  }
+  this.fallbackLink = link;
+}
+
 Map<String, INode<T>> getChildren() {
   return Collections.unmodifiableMap(children);
 }
@@ -580,6 +592,7 @@ abstract class InodeTree<T> {
 }
   }
   rootFallbackLink = fallbackLink;
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
 
 if (!gotMountTableEntry) {
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 0acb04d..891a986 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1200,10 +1200,19 @@ public class ViewFileSystem extends FileSystem {
 }
 
 
+/**
+ * {@inheritDoc}
+ *
+ * Note: listStatus on root("/") considers listing from fallbackLink if
+ * available. If the same directory name is present in configured mount
+ * path as well as in fallback link, then only the configured mount path
+ * will be listed in the returned result.
+ */
 @Override
 public FileStatus[] listStatus(Path f) throws AccessControlException,
 FileNotFoundException, IOException {
   checkPathIsSlash(f);
+  FileStatus[] fallbackStatuses = listStatusForFallbackLink();
   FileStatus[] result = new 
FileStatus[theInternalDir.getChildren().size()];
   int i = 0;
   for (Entry<String, INode<T>> iEntry :
@@ -1226,7 +1235,45 @@ public class ViewFileSystem extends FileSystem {
 myUri, null));
 }
   }
-  return result;
+  if (fallbackStatuses.length > 0) {
+return consolidateFileStatuses(fallbackStatuses, result);
+  } else {
+return result;
+  }
+}
+
+private FileStatus[] consolidateFileStatuses(FileStatus[] fallbackStatuses,
+FileStatus[] mountPointStatuses) {
+  ArrayList<FileStatus> result = new ArrayList<>();
+  Set<String> pathSet = new HashSet<>();
+  for (FileStatus status : mountPointStatuses) {
+result.add(status);
+pathSet.add(status.getPath().getName());
+  }
+  for (FileStatus status : fallbackStatuses) {
+if (!pathSet.contains(status.getPath().getName())) {
+  result.add(status);
+}
+  }
+  return result.toArray(new FileStatus[0]);
+}
+
+private FileStatus[] listStatusForFallbackLink() throws IOException {
+  if (theInternalDir.isRoot() &&
+  theInternalDir.getFallbackLink() != null) {
+FileSystem linkedFs =
+theInternalDir.getFallbackLink(
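
To see the merge behaviour described in the listStatus javadoc above, a fallback link can be configured alongside regular mount links; a minimal sketch follows. The property names follow the standard fs.viewfs.mounttable convention; the ClusterX name and namenode URIs are assumptions.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FallbackListingSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // One explicit mount link plus a fallback target for everything else.
    conf.set("fs.viewfs.mounttable.ClusterX.link./user", "hdfs://nn1/user");
    conf.set("fs.viewfs.mounttable.ClusterX.linkFallback", "hdfs://nn1/");

    FileSystem viewFs = FileSystem.get(new URI("viewfs://ClusterX/"), conf);
    // listStatus("/") now merges fallback children with the mount points; a
    // name present in both is reported once, from the mount table.
    for (FileStatus st : viewFs.listStatus(new Path("/"))) {
      System.out.println(st.getPath());
    }
  }
}
```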

[hadoop] branch branch-3.3 updated: HDFS-15306. Make mount-table to read from central place ( Let's say from HDFS). Contributed by Uma Maheswara Rao G.

2020-06-16 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 544996c  HDFS-15306. Make mount-table to read from central place ( 
Let's say from HDFS). Contributed by Uma Maheswara Rao G.
544996c is described below

commit 544996c85702af7ae241ef2f18e2597e2b4050be
Author: Uma Maheswara Rao G 
AuthorDate: Thu May 14 17:29:35 2020 -0700

HDFS-15306. Make mount-table to read from central place ( Let's say from 
HDFS). Contributed by Uma Maheswara Rao G.

(cherry picked from commit ac4a2e11d98827c7926a34cda27aa7bcfd3f36c1)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java |   5 +
 .../fs/viewfs/HCFSMountTableConfigLoader.java  | 122 ++
 .../hadoop/fs/viewfs/MountTableConfigLoader.java   |  44 +
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 180 -
 .../org/apache/hadoop/fs/viewfs/package-info.java  |  26 +++
 .../fs/viewfs/TestHCFSMountTableConfigLoader.java  | 165 +++
 ...iewFSOverloadSchemeCentralMountTableConfig.java |  77 +
 ...iewFileSystemOverloadSchemeLocalFileSystem.java |  47 --
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |  71 +++-
 ...FSOverloadSchemeWithMountTableConfigInHDFS.java |  68 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 125 +-
 11 files changed, 797 insertions(+), 133 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 37f1a16..0a5d4b4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -30,6 +30,11 @@ public interface Constants {
* Prefix for the config variable prefix for the ViewFs mount-table
*/
   public static final String CONFIG_VIEWFS_PREFIX = "fs.viewfs.mounttable";
+
+  /**
+   * Prefix for the config variable for the ViewFs mount-table path.
+   */
+  String CONFIG_VIEWFS_MOUNTTABLE_PATH = CONFIG_VIEWFS_PREFIX + ".path";
  
   /**
* Prefix for the home dir for the mount table - if not specified
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
new file mode 100644
index 000..c7e5aab
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * An implementation for Apache Hadoop compatible file system based mount-table
+ * file loading.
+ */
+public class HCFSMountTableConfigLoader implements MountTableConfigLoader {
+  private static final String REGEX_DOT = "[.]";
+  private static final Logger LOGGER =
+  LoggerFactory.getLogger(HCFSMountTableConfigLoader.class);
+  private Path mountTable = null;
+
+  /**
+   * Loads the mount-table configuration from hadoop compatible file system and
+   * add the configuration items to given configuration. Mount-table
+   * configuration format should be suffixed with version number.
+   * Format: mount-table.<versionNumber>.xml
+   * Example: mount-table.1.xml
+   * When user wants to update mount-table, the expectation is to upload new
+   * mount-table configuration file with monotonically increasing integer as
+   * version number. This API loads 
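
A minimal sketch of pointing clients at a central mount-table location, based on the fs.viewfs.mounttable.path property and the versioned file naming described above; the HDFS directory used here is an assumption, and picking the highest available version is the expected (assumed) loader behaviour.

```java
import org.apache.hadoop.conf.Configuration;

public class CentralMountTableConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Location on a Hadoop compatible file system holding the versioned
    // mount-table files, e.g. mount-table.1.xml, mount-table.2.xml, ...
    // Publishing mount-table.<N+1>.xml rolls out a new mount table version.
    conf.set("fs.viewfs.mounttable.path",
        "hdfs://MyCluster1/config/mount-table-dir");
  }
}
```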

[hadoop] branch trunk updated: HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath. Contributed by Uma Maheswara Rao G.

2020-06-12 Thread umamahesh
This is an automated email from the ASF dual-hosted git repository.

umamahesh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 785b1de  HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme 
in processPath. Contributed by Uma Maheswara Rao G.
785b1de is described below

commit 785b1def959fab6b8b766410bcd240feee13
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 12 14:32:19 2020 -0700

HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath. 
Contributed by Uma Maheswara Rao G.
---
 .../java/org/apache/hadoop/fs/shell/FsUsage.java   |   3 +-
 .../hadoop/fs/viewfs/ViewFileSystemUtil.java   |  14 +-
 ...ViewFileSystemOverloadSchemeWithFSCommands.java | 173 +
 3 files changed, 188 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
index 6596527..64aade3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
@@ -128,7 +128,8 @@ class FsUsage extends FsCommand {
 
 @Override
 protected void processPath(PathData item) throws IOException {
-  if (ViewFileSystemUtil.isViewFileSystem(item.fs)) {
+  if (ViewFileSystemUtil.isViewFileSystem(item.fs)
+  || ViewFileSystemUtil.isViewFileSystemOverloadScheme(item.fs)) {
 ViewFileSystem viewFileSystem = (ViewFileSystem) item.fs;
 Map<MountPoint, FsStatus> fsStatusMap =
 ViewFileSystemUtil.getStatus(viewFileSystem, item.path);
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
index c8a1d78..f486a10 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
@@ -52,6 +52,17 @@ public final class ViewFileSystemUtil {
   }
 
   /**
+   * Check if the FileSystem is a ViewFileSystemOverloadScheme.
+   *
+   * @param fileSystem
+   * @return true if the fileSystem is ViewFileSystemOverloadScheme
+   */
+  public static boolean isViewFileSystemOverloadScheme(
+  final FileSystem fileSystem) {
+return fileSystem instanceof ViewFileSystemOverloadScheme;
+  }
+
+  /**
* Get FsStatus for all ViewFsMountPoints matching path for the given
* ViewFileSystem.
*
@@ -93,7 +104,8 @@ public final class ViewFileSystemUtil {
*/
   public static Map<MountPoint, FsStatus> getStatus(
   FileSystem fileSystem, Path path) throws IOException {
-if (!isViewFileSystem(fileSystem)) {
+if (!(isViewFileSystem(fileSystem)
+|| isViewFileSystemOverloadScheme(fileSystem))) {
   throw new UnsupportedFileSystemException("FileSystem '"
   + fileSystem.getUri() + "'is not a ViewFileSystem.");
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
new file mode 100644
index 000..a974377
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
@@ -0,0 +1,173 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.tools;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.List;
+import java.util.Scanner;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.ha
