[hadoop] branch branch-3.3 updated: HDFS-16502. Reconfigure Block Invalidate limit (#4064)

2022-03-15 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 712d9be  HDFS-16502. Reconfigure Block Invalidate limit (#4064)
712d9be is described below

commit 712d9bece88ba45cc2b7b40c684e94f28ea6ae59
Author: Viraj Jasani 
AuthorDate: Wed Mar 16 07:02:29 2022 +0530

HDFS-16502. Reconfigure Block Invalidate limit (#4064)

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit 1c0bc35305aea6ab8037241fab10862615f3e296)
---
 .../server/blockmanagement/DatanodeManager.java| 33 ++
 .../hadoop/hdfs/server/namenode/NameNode.java  | 27 +-
 .../server/namenode/TestNameNodeReconfigure.java   | 31 
 .../org/apache/hadoop/hdfs/tools/TestDFSAdmin.java | 21 --
 4 files changed, 91 insertions(+), 21 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 005b45c..14ee733 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -314,18 +314,12 @@ public class DatanodeManager {
 DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_DEFAULT); // 5 minutes
 this.heartbeatExpireInterval = 2 * heartbeatRecheckInterval
 + 10 * 1000 * heartbeatIntervalSeconds;
-// Effected block invalidate limit is the bigger value between
-// value configured in hdfs-site.xml, and 20 * HB interval.
 final int configuredBlockInvalidateLimit = conf.getInt(
 DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY,
 DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
-final int countedBlockInvalidateLimit = 20*(int)(heartbeatIntervalSeconds);
-this.blockInvalidateLimit = Math.max(countedBlockInvalidateLimit,
-configuredBlockInvalidateLimit);
-LOG.info(DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY
-+ ": configured=" + configuredBlockInvalidateLimit
-+ ", counted=" + countedBlockInvalidateLimit
-+ ", effected=" + blockInvalidateLimit);
+// Block invalidate limit also has some dependency on heartbeat interval.
+// Check setBlockInvalidateLimit().
+setBlockInvalidateLimit(configuredBlockInvalidateLimit);
 this.checkIpHostnameInRegistration = conf.getBoolean(
 DFSConfigKeys.DFS_NAMENODE_DATANODE_REGISTRATION_IP_HOSTNAME_CHECK_KEY,
 
DFSConfigKeys.DFS_NAMENODE_DATANODE_REGISTRATION_IP_HOSTNAME_CHECK_DEFAULT);
@@ -2088,8 +2082,25 @@ public class DatanodeManager {
 this.heartbeatRecheckInterval = recheckInterval;
 this.heartbeatExpireInterval = 2L * recheckInterval + 10 * 1000
 * intervalSeconds;
-this.blockInvalidateLimit = Math.max(20 * (int) (intervalSeconds),
-blockInvalidateLimit);
+this.blockInvalidateLimit = getBlockInvalidateLimit(blockInvalidateLimit);
+  }
+
+  private int getBlockInvalidateLimitFromHBInterval() {
+return 20 * (int) heartbeatIntervalSeconds;
+  }
+
+  private int getBlockInvalidateLimit(int configuredBlockInvalidateLimit) {
+return Math.max(getBlockInvalidateLimitFromHBInterval(), configuredBlockInvalidateLimit);
+  }
+
+  public void setBlockInvalidateLimit(int configuredBlockInvalidateLimit) {
+final int countedBlockInvalidateLimit = getBlockInvalidateLimitFromHBInterval();
+// Effected block invalidate limit is the bigger value between
+// value configured in hdfs-site.xml, and 20 * HB interval.
+this.blockInvalidateLimit = getBlockInvalidateLimit(configuredBlockInvalidateLimit);
+LOG.info("{} : configured={}, counted={}, effected={}",
+DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY, configuredBlockInvalidateLimit,
+countedBlockInvalidateLimit, this.blockInvalidateLimit);
   }
 
   /**
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 79074a2..2ad7fed 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -120,6 +120,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_DEFAULT;
 import static 
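For reference, the rule the refactored DatanodeManager methods implement above is: the effective block invalidate limit is the larger of the configured dfs.block.invalidate.limit value and 20 * heartbeat interval (in seconds). A minimal standalone sketch of that rule follows; the class and method names here are illustrative, not the actual DatanodeManager API:

```java
// Sketch of the effective block-invalidate-limit rule from the diff above.
// Names are illustrative; only the max(configured, 20 * HB interval) logic
// is taken from the commit.
public class InvalidateLimitSketch {

    // Mirrors getBlockInvalidateLimitFromHBInterval(): 20 * HB interval.
    static int countedLimit(long heartbeatIntervalSeconds) {
        return 20 * (int) heartbeatIntervalSeconds;
    }

    // Mirrors getBlockInvalidateLimit(): the bigger of the configured
    // value and the counted (heartbeat-derived) value.
    static int effectiveLimit(int configured, long heartbeatIntervalSeconds) {
        return Math.max(countedLimit(heartbeatIntervalSeconds), configured);
    }

    public static void main(String[] args) {
        // With the default 3s heartbeat interval, counted = 60.
        System.out.println(effectiveLimit(1000, 3)); // configured wins: 1000
        System.out.println(effectiveLimit(10, 3));   // counted wins: 60
    }
}
```

This is why reconfiguring either the heartbeat interval or the invalidate limit at runtime re-derives the effective value: both inputs feed the same max().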

[hadoop] branch trunk updated: HDFS-16502. Reconfigure Block Invalidate limit (#4064)

2022-03-15 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1c0bc35  HDFS-16502. Reconfigure Block Invalidate limit (#4064)
1c0bc35 is described below

commit 1c0bc35305aea6ab8037241fab10862615f3e296
Author: Viraj Jasani 
AuthorDate: Wed Mar 16 07:02:29 2022 +0530

HDFS-16502. Reconfigure Block Invalidate limit (#4064)

Signed-off-by: Wei-Chiu Chuang 
---
 .../server/blockmanagement/DatanodeManager.java| 33 ++
 .../hadoop/hdfs/server/namenode/NameNode.java  | 27 +-
 .../server/namenode/TestNameNodeReconfigure.java   | 31 
 .../org/apache/hadoop/hdfs/tools/TestDFSAdmin.java | 21 --
 4 files changed, 91 insertions(+), 21 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index cfb1d83..cb601e9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -314,18 +314,12 @@ public class DatanodeManager {
 DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_DEFAULT); // 5 minutes
 this.heartbeatExpireInterval = 2 * heartbeatRecheckInterval
 + 10 * 1000 * heartbeatIntervalSeconds;
-// Effected block invalidate limit is the bigger value between
-// value configured in hdfs-site.xml, and 20 * HB interval.
 final int configuredBlockInvalidateLimit = conf.getInt(
 DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY,
 DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
-final int countedBlockInvalidateLimit = 20*(int)(heartbeatIntervalSeconds);
-this.blockInvalidateLimit = Math.max(countedBlockInvalidateLimit,
-configuredBlockInvalidateLimit);
-LOG.info(DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY
-+ ": configured=" + configuredBlockInvalidateLimit
-+ ", counted=" + countedBlockInvalidateLimit
-+ ", effected=" + blockInvalidateLimit);
+// Block invalidate limit also has some dependency on heartbeat interval.
+// Check setBlockInvalidateLimit().
+setBlockInvalidateLimit(configuredBlockInvalidateLimit);
 this.checkIpHostnameInRegistration = conf.getBoolean(
 DFSConfigKeys.DFS_NAMENODE_DATANODE_REGISTRATION_IP_HOSTNAME_CHECK_KEY,
 
DFSConfigKeys.DFS_NAMENODE_DATANODE_REGISTRATION_IP_HOSTNAME_CHECK_DEFAULT);
@@ -2088,8 +2082,25 @@ public class DatanodeManager {
 this.heartbeatRecheckInterval = recheckInterval;
 this.heartbeatExpireInterval = 2L * recheckInterval + 10 * 1000
 * intervalSeconds;
-this.blockInvalidateLimit = Math.max(20 * (int) (intervalSeconds),
-blockInvalidateLimit);
+this.blockInvalidateLimit = getBlockInvalidateLimit(blockInvalidateLimit);
+  }
+
+  private int getBlockInvalidateLimitFromHBInterval() {
+return 20 * (int) heartbeatIntervalSeconds;
+  }
+
+  private int getBlockInvalidateLimit(int configuredBlockInvalidateLimit) {
+return Math.max(getBlockInvalidateLimitFromHBInterval(), configuredBlockInvalidateLimit);
+  }
+
+  public void setBlockInvalidateLimit(int configuredBlockInvalidateLimit) {
+final int countedBlockInvalidateLimit = getBlockInvalidateLimitFromHBInterval();
+// Effected block invalidate limit is the bigger value between
+// value configured in hdfs-site.xml, and 20 * HB interval.
+this.blockInvalidateLimit = getBlockInvalidateLimit(configuredBlockInvalidateLimit);
+LOG.info("{} : configured={}, counted={}, effected={}",
+DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY, configuredBlockInvalidateLimit,
+countedBlockInvalidateLimit, this.blockInvalidateLimit);
   }
 
   /**
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 8cd5d25..ef0eef8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -121,6 +121,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_DEFAULT;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_KEY;
+import static 

[hadoop] 01/03: HADOOP-13722. Code cleanup -- ViewFileSystem and InodeTree. Contributed by Manoj Govindassamy.

2022-03-15 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit e831f7b22895608125c9a9b14723112686a989aa
Author: Andrew Wang 
AuthorDate: Fri Feb 18 18:34:11 2022 -0800

HADOOP-13722. Code cleanup -- ViewFileSystem and InodeTree. Contributed by Manoj Govindassamy.

(cherry picked from commit 0f4afc81009129bbee89d5b6cf22c8dda612d223)
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 198 ++---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  85 +
 .../apache/hadoop/fs/viewfs/TestViewFsConfig.java  |  35 ++--
 3 files changed, 146 insertions(+), 172 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 779cec8..c9bdf63 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -6,9 +6,9 @@
  * to you under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -37,47 +37,45 @@ import org.apache.hadoop.fs.UnsupportedFileSystemException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.StringUtils;
 
-
 /**
  * InodeTree implements a mount-table as a tree of inodes.
  * It is used to implement ViewFs and ViewFileSystem.
  * In order to use it the caller must subclass it and implement
  * the abstract methods {@link #getTargetFileSystem(INodeDir)}, etc.
- * 
+ *
  * The mountable is initialized from the config variables as 
  * specified in {@link ViewFs}
  *
 * @param <T> is AbstractFileSystem or FileSystem
- * 
- * The three main methods are
- * {@link #InodeTreel(Configuration)} // constructor
+ *
+ * The two main methods are
  * {@link #InodeTree(Configuration, String)} // constructor
  * {@link #resolve(String, boolean)} 
  */
 
 @InterfaceAudience.Private
-@InterfaceStability.Unstable 
+@InterfaceStability.Unstable
abstract class InodeTree<T> {
-  static enum ResultKind {isInternalDir, isExternalDir;};
+  enum ResultKind {
+INTERNAL_DIR,
+EXTERNAL_DIR
+  }
+
   static final Path SlashPath = new Path("/");
-
-  final INodeDir<T> root; // the root of the mount table
-
-  final String homedirPrefix; // the homedir config value for this mount table
-
-  List<MountPoint<T>> mountPoints = new ArrayList<MountPoint<T>>();
-
-
+  private final INodeDir<T> root; // the root of the mount table
+  private final String homedirPrefix; // the homedir for this mount table
+  private List<MountPoint<T>> mountPoints = new ArrayList<MountPoint<T>>();
+
  static class MountPoint<T> {
String src;
INodeLink<T> target;
+
MountPoint(String srcPath, INodeLink<T> mountLink) {
   src = srcPath;
   target = mountLink;
 }
-
   }
-  
+
   /**
* Breaks file path into component names.
* @param path
@@ -85,18 +83,19 @@ abstract class InodeTree {
*/
   static String[] breakIntoPathComponents(final String path) {
 return path == null ? null : path.split(Path.SEPARATOR);
-  } 
-  
+  }
+
   /**
* Internal class for inode tree
* @param 
*/
  abstract static class INode<T> {
 final String fullPath; // the full path to the root
+
 public INode(String pathToNode, UserGroupInformation aUgi) {
   fullPath = pathToNode;
 }
-  };
+  }
 
   /**
* Internal class to represent an internal dir of the mount table
@@ -106,37 +105,28 @@ abstract class InodeTree {
 final Map<String, INode<T>> children = new HashMap<String, INode<T>>();
 T InodeDirFs =  null; // file system of this internal directory of mountT
 boolean isRoot = false;
-
+
 INodeDir(final String pathToNode, final UserGroupInformation aUgi) {
   super(pathToNode, aUgi);
 }
 
-INode<T> resolve(final String pathComponent) throws FileNotFoundException {
-  final INode<T> result = resolveInternal(pathComponent);
-  if (result == null) {
-throw new FileNotFoundException();
-  }
-  return result;
-}
-
INode<T> resolveInternal(final String pathComponent) {
   return children.get(pathComponent);
 }
-
+
INodeDir<T> addDir(final String pathComponent,
-final UserGroupInformation aUgi)
-  throws FileAlreadyExistsException {
+final UserGroupInformation aUgi) throws FileAlreadyExistsException {
   if (children.containsKey(pathComponent)) {
 throw new 

[hadoop] 03/03: HADOOP-13055. Implement linkMergeSlash and linkFallback for ViewFileSystem

2022-03-15 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ef8582bfa87c734f9251dd6d9ed57dd649646134
Author: Manoj Govindassamy 
AuthorDate: Fri Oct 13 17:43:13 2017 -0700

HADOOP-13055. Implement linkMergeSlash and linkFallback for ViewFileSystem

(cherry picked from commit 133d7ca76e3d4b60292d57429d4259e80bec650a)
Fixes #4015
---
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java|  68 +++-
 .../org/apache/hadoop/fs/viewfs/Constants.java |  16 +-
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 351 ++---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  13 +-
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  14 +-
 .../hadoop/fs/viewfs/ViewFileSystemBaseTest.java   |   4 +-
 .../hadoop-hdfs/src/site/markdown/ViewFs.md|  44 ++-
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 264 
 .../viewfs/TestViewFileSystemLinkMergeSlash.java   | 234 ++
 9 files changed, 940 insertions(+), 68 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
index 8acd41f..5867f62 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.fs.viewfs;
 
 import java.net.URI;
+import java.util.Arrays;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.util.StringUtils;
@@ -68,7 +69,72 @@ public class ConfigUtil {
 addLink( conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, 
 src, target);   
   }
-  
+
+  /**
+   * Add a LinkMergeSlash to the config for the specified mount table.
+   * @param conf
+   * @param mountTableName
+   * @param target
+   */
+  public static void addLinkMergeSlash(Configuration conf,
+  final String mountTableName, final URI target) {
+conf.set(getConfigViewFsPrefix(mountTableName) + "." +
+Constants.CONFIG_VIEWFS_LINK_MERGE_SLASH, target.toString());
+  }
+
+  /**
+   * Add a LinkMergeSlash to the config for the default mount table.
+   * @param conf
+   * @param target
+   */
+  public static void addLinkMergeSlash(Configuration conf, final URI target) {
+addLinkMergeSlash(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE,
+target);
+  }
+
+  /**
+   * Add a LinkFallback to the config for the specified mount table.
+   * @param conf
+   * @param mountTableName
+   * @param target
+   */
+  public static void addLinkFallback(Configuration conf,
+  final String mountTableName, final URI target) {
+conf.set(getConfigViewFsPrefix(mountTableName) + "." +
+Constants.CONFIG_VIEWFS_LINK_FALLBACK, target.toString());
+  }
+
+  /**
+   * Add a LinkFallback to the config for the default mount table.
+   * @param conf
+   * @param target
+   */
+  public static void addLinkFallback(Configuration conf, final URI target) {
+addLinkFallback(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE,
+target);
+  }
+
+  /**
+   * Add a LinkMerge to the config for the specified mount table.
+   * @param conf
+   * @param mountTableName
+   * @param targets
+   */
+  public static void addLinkMerge(Configuration conf,
+  final String mountTableName, final URI[] targets) {
+conf.set(getConfigViewFsPrefix(mountTableName) + "." +
+Constants.CONFIG_VIEWFS_LINK_MERGE, Arrays.toString(targets));
+  }
+
+  /**
+   * Add a LinkMerge to the config for the default mount table.
+   * @param conf
+   * @param targets
+   */
+  public static void addLinkMerge(Configuration conf, final URI[] targets) {
+addLinkMerge(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, targets);
+  }
+
   /**
*
* @param conf
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 3f9aae2..7a0a6661 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -51,12 +51,17 @@ public interface Constants {
   /**
* Config variable for specifying a simple link
*/
-  public static final String CONFIG_VIEWFS_LINK = "link";
-  
+  String CONFIG_VIEWFS_LINK = "link";
+
+  /**
+   * Config variable for specifying a fallback for link mount points.
+   */
+  String CONFIG_VIEWFS_LINK_FALLBACK = "linkFallback";
+
   /**
* Config variable for specifying a merge link
*/
-  public static final String CONFIG_VIEWFS_LINK_MERGE = "linkMerge";
+  String CONFIG_VIEWFS_LINK_MERGE = 

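The ConfigUtil helpers in the diff above all follow one pattern: prepend the per-mount-table prefix to a link-type constant and store the target URI under the resulting key. A minimal standalone sketch of that key construction follows; the `fs.viewfs.mounttable` prefix is assumed from ViewFs convention, and `buildKey` is an illustrative helper, not a Hadoop API:

```java
// Sketch of the config-key layout used by addLinkMergeSlash / addLinkFallback
// above: <prefix>.<mountTable>.<linkType> = target URI. Illustrative only.
public class ViewFsKeySketch {

    // Assumed per-mount-table prefix (Hadoop derives it via
    // getConfigViewFsPrefix(mountTableName)).
    static final String PREFIX = "fs.viewfs.mounttable";

    // Builds the key a link helper would set for a given mount table.
    static String buildKey(String mountTable, String linkType) {
        return PREFIX + "." + mountTable + "." + linkType;
    }

    public static void main(String[] args) {
        // Key addLinkMergeSlash would use for mount table "cluster1"
        System.out.println(buildKey("cluster1", "linkMergeSlash"));
        // Key addLinkFallback would use for the default mount table
        System.out.println(buildKey("default", "linkFallback"));
    }
}
```

The per-table prefix is what lets one core-site.xml carry link definitions for several independent mount tables.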
[hadoop] 02/03: HADOOP-12077. Provide a multi-URI replication Inode for ViewFs. Contributed by Gera Shegalov

2022-03-15 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 20ff29471480fc9c74e6cf78a0ad152a6ba0772f
Author: Chris Douglas 
AuthorDate: Tue Sep 5 23:30:18 2017 -0700

HADOOP-12077. Provide a multi-URI replication Inode for ViewFs. Contributed by Gera Shegalov

(cherry picked from commit 1f3bc63e6772be81bc9a6a7d93ed81d2a9e066c0)
---
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java|  27 +
 .../org/apache/hadoop/fs/viewfs/Constants.java |   8 +-
 .../org/apache/hadoop/fs/viewfs/InodeTree.java |  62 +-
 .../org/apache/hadoop/fs/viewfs/NflyFSystem.java   | 951 +
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  34 +-
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |   7 +-
 .../viewfs/TestViewFileSystemLocalFileSystem.java  |  77 +-
 .../apache/hadoop/fs/viewfs/TestViewFsConfig.java  |  10 +-
 .../hadoop/fs/viewfs/TestViewFileSystemHdfs.java   | 147 +++-
 9 files changed, 1270 insertions(+), 53 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
index bb941c7..8acd41f 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.fs.viewfs;
 import java.net.URI;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * Utilities for config variables of the viewFs See {@link ViewFs}
@@ -69,6 +70,32 @@ public class ConfigUtil {
   }
   
   /**
+   *
+   * @param conf
+   * @param mountTableName
+   * @param src
+   * @param settings
+   * @param targets
+   */
+  public static void addLinkNfly(Configuration conf, String mountTableName,
+  String src, String settings, final URI ... targets) {
+
+settings = settings == null
+? "minReplication=2,repairOnRead=true"
+: settings;
+
+conf.set(getConfigViewFsPrefix(mountTableName) + "." +
+Constants.CONFIG_VIEWFS_LINK_NFLY + "." + settings + "." + src,
+StringUtils.uriToString(targets));
+  }
+
+  public static void addLinkNfly(final Configuration conf, final String src,
+  final URI ... targets) {
+addLinkNfly(conf, Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, src, null,
+targets);
+  }
+
+  /**
* Add config variable for homedir for default mount table
* @param conf - add to this conf
* @param homedir - the home dir path starting with slash
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 0c0e8a3..3f9aae2 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -57,7 +57,13 @@ public interface Constants {
* Config variable for specifying a merge link
*/
   public static final String CONFIG_VIEWFS_LINK_MERGE = "linkMerge";
-  
+
+  /**
+   * Config variable for specifying an nfly link. Nfly writes to multiple
+   * locations, and allows reads from the closest one.
+   */
+  String CONFIG_VIEWFS_LINK_NFLY = "linkNfly";
+
   /**
* Config variable for specifying a merge of the root of the mount-table
*  with the root of another file system. 
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index c9bdf63..199ccc6 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -134,6 +134,12 @@ abstract class InodeTree {
 }
   }
 
+  enum LinkType {
+SINGLE,
+MERGE,
+NFLY
+  }
+
   /**
* An internal class to represent a mount link.
* A mount link can be single dir link or a merge dir link.
@@ -147,7 +153,6 @@ abstract class InodeTree {
* is changed later it is then ignored (a dir with null entries)
*/
  static class INodeLink<T> extends INode<T> {
-final boolean isMergeLink; // true if MergeLink
 final URI[] targetDirLinkList;
 private T targetFileSystem;   // file system object created from the link.
 // Function to initialize file system. Only applicable for simple links
@@ -155,14 +160,13 @@ abstract class InodeTree {
 private final Object lock = new Object();
 
 /**
- * Construct a mergeLink.
+ * Construct a mergeLink or nfly.
  */
 

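An nfly key differs from the simpler link keys in that it embeds the settings string and the source path, as addLinkNfly above shows: the key is the mount-table prefix, then "linkNfly", then the settings (defaulting to "minReplication=2,repairOnRead=true" when null), then the source path. A standalone sketch of that layout, assuming the conventional `fs.viewfs.mounttable` prefix (illustrative names, not Hadoop API):

```java
// Sketch of the key addLinkNfly builds:
// <prefix>.<table>.linkNfly.<settings>.<src> = comma-joined target URIs.
public class NflyKeySketch {

    // Assumed prefix; Hadoop derives it via getConfigViewFsPrefix().
    static final String PREFIX = "fs.viewfs.mounttable";

    // Default applied by addLinkNfly when settings == null.
    static final String DEFAULT_SETTINGS = "minReplication=2,repairOnRead=true";

    static String nflyKey(String mountTable, String settings, String src) {
        if (settings == null) {
            settings = DEFAULT_SETTINGS;
        }
        return PREFIX + "." + mountTable + ".linkNfly." + settings + "." + src;
    }

    public static void main(String[] args) {
        // Null settings fall back to the defaults, as in the second
        // addLinkNfly overload in the diff above.
        System.out.println(nflyKey("default", null, "/data"));
    }
}
```

Packing the settings into the key itself is what lets each nfly mount point carry its own replication policy without extra configuration entries.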
[hadoop] branch branch-2.10 updated (efe515d -> ef8582b)

2022-03-15 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from efe515d  HADOOP-18158. Fix failure of create-release script due to releasedocmaker changes in branch-2.10 (#4055)
 new e831f7b  HADOOP-13722. Code cleanup -- ViewFileSystem and InodeTree. Contributed by Manoj Govindassamy.
 new 20ff294  HADOOP-12077. Provide a multi-URI replication Inode for ViewFs. Contributed by Gera Shegalov
 new ef8582b  HADOOP-13055. Implement linkMergeSlash and linkFallback for ViewFileSystem

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java|  95 +-
 .../org/apache/hadoop/fs/viewfs/Constants.java |  24 +-
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 573 +
 .../org/apache/hadoop/fs/viewfs/NflyFSystem.java   | 951 +
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 132 +--
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   |  21 +-
 .../viewfs/TestViewFileSystemLocalFileSystem.java  |  77 +-
 .../apache/hadoop/fs/viewfs/TestViewFsConfig.java  |  37 +-
 .../hadoop/fs/viewfs/ViewFileSystemBaseTest.java   |   4 +-
 .../hadoop-hdfs/src/site/markdown/ViewFs.md|  44 +-
 .../hadoop/fs/viewfs/TestViewFileSystemHdfs.java   | 147 +++-
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 264 ++
 .../viewfs/TestViewFileSystemLinkMergeSlash.java   | 234 +
 13 files changed, 2333 insertions(+), 270 deletions(-)
 create mode 100644 hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/NflyFSystem.java
 create mode 100644 hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
 create mode 100644 hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkMergeSlash.java

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org