[hadoop] branch branch-3.3.0 updated: YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars (#2075)

2020-06-16 Thread vinayakumarb
This is an automated email from the ASF dual-hosted git repository.

vinayakumarb pushed a commit to branch branch-3.3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3.0 by this push:
 new 8382e31  YARN-10314. YarnClient throws NoClassDefFoundError for 
WebSocketException with only shaded client jars (#2075)
8382e31 is described below

commit 8382e31c0c33c3d69aff8690adc7c1bbe5137ee6
Author: Vinayakumar B 
AuthorDate: Wed Jun 17 09:26:41 2020 +0530

YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException 
with only shaded client jars (#2075)
---
 hadoop-client-modules/hadoop-client-minicluster/pom.xml | 16 +---
 hadoop-client-modules/hadoop-client-runtime/pom.xml | 11 +++
 2 files changed, 20 insertions(+), 7 deletions(-)
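The failure mode behind YARN-10314 is a link-time error: with only the shaded client jars on the classpath, the first code path that touches Jetty's WebSocket classes dies with `NoClassDefFoundError`. A minimal, hypothetical probe (plain JDK only, no Hadoop or Jetty required) shows how to check whether a class is actually resolvable before it is needed:

```java
public class Main {
    // Returns true if the named class can be loaded from the current classpath.
    // Passing initialize=false avoids running static initializers during the probe.
    static boolean onClasspath(String className) {
        try {
            Class.forName(className, false, Main.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.lang.String ships with the JDK, so it always resolves.
        System.out.println("java.lang.String: " + onClasspath("java.lang.String"));
        // Jetty is absent on a bare JVM classpath, mirroring the YARN-10314
        // situation where the shaded jars no longer bundled these classes.
        System.out.println("WebSocketException: "
                + onClasspath("org.eclipse.jetty.websocket.api.WebSocketException"));
    }
}
```

This kind of probe is how one can verify, before the fix, that the shaded runtime jar fails to provide the Jetty WebSocket classes.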

diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml 
b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index 52595d9..dd954d3 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -811,15 +811,25 @@
                 <exclude>*/**</exclude>
               </excludes>
             </filter>
-
+
             <filter>
               <artifact>org.eclipse.jetty:jetty-client</artifact>
               <excludes>
                 <exclude>*/**</exclude>
               </excludes>
             </filter>
+            <filter>
+              <artifact>org.eclipse.jetty:jetty-xml</artifact>
+              <excludes>
+                <exclude>*/**</exclude>
+              </excludes>
+            </filter>
+            <filter>
+              <artifact>org.eclipse.jetty:jetty-http</artifact>
+              <excludes>
+                <exclude>*/**</exclude>
+              </excludes>
+            </filter>
           </filters>
   
diff --git a/hadoop-client-modules/hadoop-client-runtime/pom.xml 
b/hadoop-client-modules/hadoop-client-runtime/pom.xml
index 4960235..bf5e527 100644
--- a/hadoop-client-modules/hadoop-client-runtime/pom.xml
+++ b/hadoop-client-modules/hadoop-client-runtime/pom.xml
@@ -158,12 +158,8 @@
 
                   <exclude>com.google.code.findbugs:jsr305</exclude>
                   <exclude>io.dropwizard.metrics:metrics-core</exclude>
-                  <exclude>org.eclipse.jetty.websocket:*</exclude>
                   <exclude>org.eclipse.jetty:jetty-servlet</exclude>
                   <exclude>org.eclipse.jetty:jetty-security</exclude>
-                  <exclude>org.eclipse.jetty:jetty-client</exclude>
-                  <exclude>org.eclipse.jetty:jetty-http</exclude>
-                  <exclude>org.eclipse.jetty:jetty-xml</exclude>
                   <exclude>org.ow2.asm:*</exclude>
 
                   <exclude>org.bouncycastle:*</exclude>
@@ -214,6 +210,13 @@
 
 
 
+            <filter>
+              <artifact>org.eclipse.jetty.websocket:*</artifact>
+              <excludes>
+                <exclude>about.html</exclude>
+              </excludes>
+            </filter>
+
             <filter>
               <artifact>org.apache.kerby:kerb-util</artifact>
   


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars (#2075)

2020-06-16 Thread vinayakumarb

vinayakumarb pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new c1ef247  YARN-10314. YarnClient throws NoClassDefFoundError for 
WebSocketException with only shaded client jars (#2075)
c1ef247 is described below

commit c1ef247dc694097533a6bda4697f593deab2afb1
Author: Vinayakumar B 
AuthorDate: Wed Jun 17 09:26:41 2020 +0530

YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException 
with only shaded client jars (#2075)
---
 hadoop-client-modules/hadoop-client-minicluster/pom.xml | 16 +---
 hadoop-client-modules/hadoop-client-runtime/pom.xml | 11 +++
 2 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml 
b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index 6c8bc21..48b6619 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -811,15 +811,25 @@
                 <exclude>*/**</exclude>
               </excludes>
             </filter>
-
+
             <filter>
               <artifact>org.eclipse.jetty:jetty-client</artifact>
               <excludes>
                 <exclude>*/**</exclude>
               </excludes>
             </filter>
+            <filter>
+              <artifact>org.eclipse.jetty:jetty-xml</artifact>
+              <excludes>
+                <exclude>*/**</exclude>
+              </excludes>
+            </filter>
+            <filter>
+              <artifact>org.eclipse.jetty:jetty-http</artifact>
+              <excludes>
+                <exclude>*/**</exclude>
+              </excludes>
+            </filter>
           </filters>
   
diff --git a/hadoop-client-modules/hadoop-client-runtime/pom.xml 
b/hadoop-client-modules/hadoop-client-runtime/pom.xml
index 5e00f9f..80bd1ee 100644
--- a/hadoop-client-modules/hadoop-client-runtime/pom.xml
+++ b/hadoop-client-modules/hadoop-client-runtime/pom.xml
@@ -158,12 +158,8 @@
 
                   <exclude>com.google.code.findbugs:jsr305</exclude>
                   <exclude>io.dropwizard.metrics:metrics-core</exclude>
-                  <exclude>org.eclipse.jetty.websocket:*</exclude>
                   <exclude>org.eclipse.jetty:jetty-servlet</exclude>
                   <exclude>org.eclipse.jetty:jetty-security</exclude>
-                  <exclude>org.eclipse.jetty:jetty-client</exclude>
-                  <exclude>org.eclipse.jetty:jetty-http</exclude>
-                  <exclude>org.eclipse.jetty:jetty-xml</exclude>
                   <exclude>org.ow2.asm:*</exclude>
 
                   <exclude>org.bouncycastle:*</exclude>
@@ -214,6 +210,13 @@
 
 
 
+            <filter>
+              <artifact>org.eclipse.jetty.websocket:*</artifact>
+              <excludes>
+                <exclude>about.html</exclude>
+              </excludes>
+            </filter>
+
             <filter>
               <artifact>org.apache.kerby:kerb-util</artifact>
   





[hadoop] branch trunk updated: YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException with only shaded client jars (#2075)

2020-06-16 Thread vinayakumarb

vinayakumarb pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new fc4ebb0  YARN-10314. YarnClient throws NoClassDefFoundError for 
WebSocketException with only shaded client jars (#2075)
fc4ebb0 is described below

commit fc4ebb0499fe1095b87ff782c265e9afce154266
Author: Vinayakumar B 
AuthorDate: Wed Jun 17 09:26:41 2020 +0530

YARN-10314. YarnClient throws NoClassDefFoundError for WebSocketException 
with only shaded client jars (#2075)
---
 hadoop-client-modules/hadoop-client-minicluster/pom.xml | 16 +---
 hadoop-client-modules/hadoop-client-runtime/pom.xml | 11 +++
 2 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml 
b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index b447eed..f66528d 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -811,15 +811,25 @@
                 <exclude>*/**</exclude>
               </excludes>
             </filter>
-
+
             <filter>
               <artifact>org.eclipse.jetty:jetty-client</artifact>
               <excludes>
                 <exclude>*/**</exclude>
               </excludes>
             </filter>
+            <filter>
+              <artifact>org.eclipse.jetty:jetty-xml</artifact>
+              <excludes>
+                <exclude>*/**</exclude>
+              </excludes>
+            </filter>
+            <filter>
+              <artifact>org.eclipse.jetty:jetty-http</artifact>
+              <excludes>
+                <exclude>*/**</exclude>
+              </excludes>
+            </filter>
           </filters>
   
diff --git a/hadoop-client-modules/hadoop-client-runtime/pom.xml 
b/hadoop-client-modules/hadoop-client-runtime/pom.xml
index fe95ed8..9a1efff 100644
--- a/hadoop-client-modules/hadoop-client-runtime/pom.xml
+++ b/hadoop-client-modules/hadoop-client-runtime/pom.xml
@@ -158,12 +158,8 @@
 
                   <exclude>com.google.code.findbugs:jsr305</exclude>
                   <exclude>io.dropwizard.metrics:metrics-core</exclude>
-                  <exclude>org.eclipse.jetty.websocket:*</exclude>
                   <exclude>org.eclipse.jetty:jetty-servlet</exclude>
                   <exclude>org.eclipse.jetty:jetty-security</exclude>
-                  <exclude>org.eclipse.jetty:jetty-client</exclude>
-                  <exclude>org.eclipse.jetty:jetty-http</exclude>
-                  <exclude>org.eclipse.jetty:jetty-xml</exclude>
                   <exclude>org.ow2.asm:*</exclude>
 
                   <exclude>org.bouncycastle:*</exclude>
@@ -214,6 +210,13 @@
 
 
 
+            <filter>
+              <artifact>org.eclipse.jetty.websocket:*</artifact>
+              <excludes>
+                <exclude>about.html</exclude>
+              </excludes>
+            </filter>
+
             <filter>
               <artifact>org.apache.kerby:kerb-util</artifact>
   





[hadoop] branch branch-3.3 updated: HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath. Contributed by Uma Maheswara Rao G.

2020-06-16 Thread umamahesh

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 120ee79  HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme 
in processPath. Contributed by Uma Maheswara Rao G.
120ee79 is described below

commit 120ee793fc4bcbf9d1945d5e38e3ad5b2b290a0e
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 12 14:32:19 2020 -0700

HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 785b1def959fab6b8b766410bcd240feee13)
---
 .../java/org/apache/hadoop/fs/shell/FsUsage.java   |   3 +-
 .../hadoop/fs/viewfs/ViewFileSystemUtil.java   |  14 +-
 ...ViewFileSystemOverloadSchemeWithFSCommands.java | 173 +
 3 files changed, 188 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
index 6596527..64aade3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
@@ -128,7 +128,8 @@ class FsUsage extends FsCommand {
 
 @Override
 protected void processPath(PathData item) throws IOException {
-  if (ViewFileSystemUtil.isViewFileSystem(item.fs)) {
+  if (ViewFileSystemUtil.isViewFileSystem(item.fs)
+  || ViewFileSystemUtil.isViewFileSystemOverloadScheme(item.fs)) {
 ViewFileSystem viewFileSystem = (ViewFileSystem) item.fs;
Map<MountPoint, FsStatus> fsStatusMap =
 ViewFileSystemUtil.getStatus(viewFileSystem, item.path);
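The added predicate matters because `isViewFileSystem` appears to key off the `viewfs` scheme name, while `ViewFileSystemOverloadScheme` serves an ordinary scheme such as `hdfs`. A small stand-in (these classes are illustrative placeholders, not the Hadoop ones) shows why a scheme-name check alone misses the subclass and an `instanceof` check is needed alongside it:

```java
public class Main {
    // Simplified stand-ins for FileSystem / ViewFileSystem / ViewFileSystemOverloadScheme.
    static class FileSystem {
        String getScheme() { return "file"; }
    }
    static class ViewFileSystem extends FileSystem {
        @Override String getScheme() { return "viewfs"; }
    }
    // The overload-scheme subclass answers for an ordinary scheme such as hdfs.
    static class ViewFileSystemOverloadScheme extends ViewFileSystem {
        @Override String getScheme() { return "hdfs"; }
    }

    // A check keyed on the scheme name misses the overload-scheme subclass.
    static boolean isViewBySchemeName(FileSystem fs) {
        return "viewfs".equals(fs.getScheme());
    }
    // An instanceof check catches it regardless of the scheme it serves.
    static boolean isOverloadScheme(FileSystem fs) {
        return fs instanceof ViewFileSystemOverloadScheme;
    }

    public static void main(String[] args) {
        FileSystem fs = new ViewFileSystemOverloadScheme();
        System.out.println(isViewBySchemeName(fs)); // false: its scheme is "hdfs"
        System.out.println(isOverloadScheme(fs));   // true
    }
}
```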
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
index c8a1d78..f486a10 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemUtil.java
@@ -52,6 +52,17 @@ public final class ViewFileSystemUtil {
   }
 
   /**
+   * Check if the FileSystem is a ViewFileSystemOverloadScheme.
+   *
+   * @param fileSystem
+   * @return true if the fileSystem is ViewFileSystemOverloadScheme
+   */
+  public static boolean isViewFileSystemOverloadScheme(
+  final FileSystem fileSystem) {
+return fileSystem instanceof ViewFileSystemOverloadScheme;
+  }
+
+  /**
* Get FsStatus for all ViewFsMountPoints matching path for the given
* ViewFileSystem.
*
@@ -93,7 +104,8 @@ public final class ViewFileSystemUtil {
*/
  public static Map<MountPoint, FsStatus> getStatus(
   FileSystem fileSystem, Path path) throws IOException {
-if (!isViewFileSystem(fileSystem)) {
+if (!(isViewFileSystem(fileSystem)
+|| isViewFileSystemOverloadScheme(fileSystem))) {
   throw new UnsupportedFileSystemException("FileSystem '"
   + fileSystem.getUri() + "'is not a ViewFileSystem.");
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
new file mode 100644
index 000..a974377
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithFSCommands.java
@@ -0,0 +1,173 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.tools;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.List;
+import java.util.Scanner;
+
+import 

[hadoop] branch branch-3.3 updated: HDFS-15389. DFSAdmin should close filesystem and dfsadmin -setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by Ayush Saxena

2020-06-16 Thread umamahesh

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new bee2846  HDFS-15389. DFSAdmin should close filesystem and dfsadmin 
-setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by 
Ayush Saxena
bee2846 is described below

commit bee2846bee4ae676bdc14585f8a3927a9dd7df37
Author: Ayush Saxena 
AuthorDate: Sat Jun 6 10:49:38 2020 +0530

HDFS-15389. DFSAdmin should close filesystem and dfsadmin 
-setBalancerBandwidth should work with ViewFSOverloadScheme. Contributed by 
Ayush Saxena

(cherry picked from commit cc671b16f7b0b7c1ed7b41b96171653dc43cf670)
---
 .../java/org/apache/hadoop/hdfs/tools/DFSAdmin.java  | 13 +++--
 ...TestViewFileSystemOverloadSchemeWithDFSAdmin.java | 20 
 2 files changed, 23 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
index 6ab16c3..ec5fa0a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
@@ -479,9 +479,9 @@ public class DFSAdmin extends FsShell {
   public DFSAdmin(Configuration conf) {
 super(conf);
   }
-  
+
   protected DistributedFileSystem getDFS() throws IOException {
-return AdminHelper.getDFS(getConf());
+return AdminHelper.checkAndGetDFS(getFS(), getConf());
   }
   
   /**
@@ -1036,14 +1036,7 @@ public class DFSAdmin extends FsShell {
   System.err.println("Bandwidth should be a non-negative integer");
   return exitCode;
 }
-
-FileSystem fs = getFS();
-if (!(fs instanceof DistributedFileSystem)) {
-  System.err.println("FileSystem is " + fs.getUri());
-  return exitCode;
-}
-
-DistributedFileSystem dfs = (DistributedFileSystem) fs;
+DistributedFileSystem dfs = getDFS();
 try{
   dfs.setBalancerBandwidth(bandwidth);
   System.out.println("Balancer bandwidth is set to " + bandwidth);
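The refactor above replaces an inline `instanceof`/cast at the call site with a shared cast-or-throw helper, so every dfsadmin command reports a file-system mismatch the same way. The pattern, sketched with placeholder types (not the real `AdminHelper` API):

```java
import java.io.IOException;

public class Main {
    // Placeholder types standing in for Hadoop's FileSystem hierarchy.
    static class FileSystem { }
    static class DistributedFileSystem extends FileSystem { }

    // Cast-or-throw helper: centralizes the type check that was previously
    // duplicated inline, returning the concrete type or failing with a
    // clear message.
    static DistributedFileSystem checkAndGetDFS(FileSystem fs) throws IOException {
        if (!(fs instanceof DistributedFileSystem)) {
            throw new IOException(
                    "FileSystem " + fs.getClass().getSimpleName()
                    + " is not a DistributedFileSystem");
        }
        return (DistributedFileSystem) fs;
    }

    public static void main(String[] args) throws IOException {
        // Succeeds for the expected concrete type.
        System.out.println(checkAndGetDFS(new DistributedFileSystem()) != null);
    }
}
```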
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
index 1961dc2..a9475dd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
@@ -263,4 +263,24 @@ public class TestViewFileSystemOverloadSchemeWithDFSAdmin {
 assertOutMsg("Disallowing snapshot on / succeeded", 1);
 assertEquals(0, ret);
   }
+
+  /**
+   * Tests setBalancerBandwidth with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testSetBalancerBandwidth() throws Exception {
+final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+addMountLinks(defaultFSURI.getAuthority(),
+new String[] {HDFS_USER_FOLDER, LOCAL_FOLDER },
+new String[] {hdfsTargetPath.toUri().toString(),
+localTargetDir.toURI().toString() },
+conf);
+final DFSAdmin dfsAdmin = new DFSAdmin(conf);
+redirectStream();
+int ret = ToolRunner.run(dfsAdmin,
+new String[] {"-fs", defaultFSURI.toString(), "-setBalancerBandwidth",
+"1000"});
+assertOutMsg("Balancer bandwidth is set to 1000", 0);
+assertEquals(0, ret);
+  }
 }
\ No newline at end of file
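The test above relies on a `redirectStream()`/`assertOutMsg()` pattern: capture what the CLI prints to stdout and assert on it. A self-contained sketch of that capture technique (names here are mine, not the test's helpers):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class Main {
    // Captures everything printed to System.out while the action runs,
    // restoring the original stream afterwards even if the action throws.
    static String captureStdout(Runnable action) {
        PrintStream original = System.out;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buffer));
        try {
            action.run();
        } finally {
            System.setOut(original);
        }
        return buffer.toString();
    }

    public static void main(String[] args) {
        String out = captureStdout(() ->
                System.out.println("Balancer bandwidth is set to 1000"));
        // Assert on the captured message, as assertOutMsg does in the test.
        System.out.println(out.contains("Balancer bandwidth is set to 1000"));
    }
}
```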





[hadoop] branch branch-3.3 updated: HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme. Contributed by Uma Maheswara Rao G.

2020-06-16 Thread umamahesh

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 0b5e202  HDFS-15321. Make DFSAdmin tool to work with 
ViewFileSystemOverloadScheme. Contributed by Uma Maheswara Rao G.
0b5e202 is described below

commit 0b5e202614f0bc20a0db6656f924fa4d2741d00c
Author: Uma Maheswara Rao G 
AuthorDate: Tue Jun 2 11:09:26 2020 -0700

HDFS-15321. Make DFSAdmin tool to work with ViewFileSystemOverloadScheme. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit ed83c865dd0b4e92f3f89f79543acc23792bb69c)
---
 .../fs/viewfs/ViewFileSystemOverloadScheme.java|  29 +++
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |   2 +-
 .../org/apache/hadoop/hdfs/tools/AdminHelper.java  |  25 +-
 .../org/apache/hadoop/hdfs/tools/DFSAdmin.java |  13 +-
 ...stViewFileSystemOverloadSchemeWithDFSAdmin.java | 266 +
 5 files changed, 317 insertions(+), 18 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
index f5952d5..36f9cd1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.fs.viewfs;
 
+import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
@@ -27,6 +28,7 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnsupportedFileSystemException;
 
 /**
@@ -227,4 +229,31 @@ public class ViewFileSystemOverloadScheme extends 
ViewFileSystem {
 
   }
 
+  /**
+   * This is an admin-only API that gives access to the child raw file system
+   * if the path is a link. If the given path is an internal directory (a path
+   * from the mount paths tree), it initializes the file system for the given
+   * path URI directly. If the path cannot be resolved to any internal
+   * directory or link, it throws NotInMountpointException. Note that this API
+   * does not return a chrooted file system; it returns the actual raw file
+   * system instance.
+   *
+   * @param path - fs uri path
+   * @param conf - configuration
+   * @throws IOException
+   */
+  public FileSystem getRawFileSystem(Path path, Configuration conf)
+  throws IOException {
+InodeTree.ResolveResult<FileSystem> res;
+try {
+  res = fsState.resolve(getUriPath(path), true);
+  return res.isInternalDir() ? fsGetter().get(path.toUri(), conf)
+  : ((ChRootedFileSystem) res.targetFileSystem).getMyFs();
+} catch (FileNotFoundException e) {
+  // No link configured with passed path.
+  throw new NotInMountpointException(path,
+  "No link found for the given path.");
+}
+  }
+
 }
\ No newline at end of file
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
index f051c9c..efced73 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
@@ -192,7 +192,7 @@ public class ViewFsTestSetup {
* Adds the given mount links to the configuration. Mount link mappings are
* in sources, targets at their respective index locations.
*/
-  static void addMountLinksToConf(String mountTable, String[] sources,
+  public static void addMountLinksToConf(String mountTable, String[] sources,
   String[] targets, Configuration config) throws URISyntaxException {
 for (int i = 0; i < sources.length; i++) {
   String src = sources[i];
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
index 9cb646b..27cdf70 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/AdminHelper.java
@@ -1,4 +1,5 @@
 /**
+
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor 

[hadoop] branch branch-3.3 updated: HDFS-15330. Document the ViewFSOverloadScheme details in ViewFS guide. Contributed by Uma Maheswara Rao G.

2020-06-16 Thread umamahesh

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 4185804  HDFS-15330. Document the ViewFSOverloadScheme details in 
ViewFS guide. Contributed by Uma Maheswara Rao G.
4185804 is described below

commit 418580446b65be3a0674762e76fc2cb9a1e5629a
Author: Uma Maheswara Rao G 
AuthorDate: Fri Jun 5 10:58:21 2020 -0700

HDFS-15330. Document the ViewFSOverloadScheme details in ViewFS guide. 
Contributed by Uma Maheswara Rao G.

(cherry picked from commit 76fa0222f0d2e2d92b4a1eedba8b3e38002e8c23)
---
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |  40 -
 .../hadoop-hdfs/src/site/markdown/ViewFs.md|   6 +
 .../src/site/markdown/ViewFsOverloadScheme.md  | 163 +
 .../site/resources/images/ViewFSOverloadScheme.png | Bin 0 -> 190004 bytes
 hadoop-project/src/site/site.xml   |   1 +
 5 files changed, 209 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index bc5ac30..d199c06 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -693,4 +693,42 @@ Usage: `hdfs debug recoverLease -path  [-retries 
]`
 | [`-path` *path*] | HDFS path for which to recover the lease. |
 | [`-retries` *num-retries*] | Number of times the client will retry calling 
recoverLease. The default number of retries is 1. |
 
-Recover the lease on the specified path. The path must reside on an HDFS 
filesystem. The default number of retries is 1.
+Recover the lease on the specified path. The path must reside on an HDFS file 
system. The default number of retries is 1.
+
+dfsadmin with ViewFsOverloadScheme
+--
+
+Usage: `hdfs dfsadmin -fs  `
+
+| COMMAND\_OPTION | Description |
+|: |: |
| `-fs` *child fs mount link URI* | A logical mount link path to a child file 
system in the ViewFS world. This URI is typically formed as the src mount link 
prefixed with fs.defaultFS. Note that this is not an actual child file system 
URI; it is a logical mount link URI pointing to the actual child file system. |
+
+Example command usage:
+   `hdfs dfsadmin -fs hdfs://nn1 -safemode enter`
+
+In ViewFsOverloadScheme, we may have multiple child file systems as mount 
point mappings, as shown in the [ViewFsOverloadScheme 
Guide](./ViewFsOverloadScheme.html). Here the -fs option is an optional generic 
parameter supported by dfsadmin. When users want to execute commands on one of 
the child file systems, they need to pass that file system's mount mapping link 
URI to the -fs option. Let's take an example mount link configuration and 
dfsadmin command below.
+
+Mount link:
+
+```xml
+<property>
+  <name>fs.defaultFS</name>
+  <value>hdfs://MyCluster1</value>
+</property>
+
+<property>
+  <name>fs.viewfs.mounttable.MyCluster1./user</name>
+  <value>hdfs://MyCluster2/user</value>
+  <!-- hdfs://MyCluster2/user
+   mount link path: /user
+   mount link uri: hdfs://MyCluster1/user
+   mount target uri for /user: hdfs://MyCluster2/user -->
+</property>
+```
+
+If a user wants to talk to `hdfs://MyCluster2/`, they can pass the -fs option 
(`-fs hdfs://MyCluster1/user`).
+Since /user was mapped to the cluster path `hdfs://MyCluster2/user`, dfsadmin 
resolves the passed (`-fs hdfs://MyCluster1/user`) to the target fs 
(`hdfs://MyCluster2/user`).
+This way users can access all hdfs child file systems in ViewFsOverloadScheme.
+If no `-fs` option is provided, then it will try to connect to the configured 
fs.defaultFS cluster, if a cluster is running at the fs.defaultFS uri.
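The resolution the documentation describes is essentially a longest-prefix lookup from a logical mount link path to a target file system URI. A toy sketch of that lookup (a simplification under assumed semantics, not the real ViewFS resolver; the cluster names come from the example above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {
    // Hypothetical mount table: logical mount link path -> target fs URI,
    // mirroring the example where /user on MyCluster1 maps to MyCluster2.
    static final Map<String, String> MOUNTS = new LinkedHashMap<>();
    static {
        MOUNTS.put("/user", "hdfs://MyCluster2/user");
    }

    // Resolve a logical URI such as hdfs://MyCluster1/user to its target fs.
    // Assumes logicalUri starts with defaultFs; falls back to fs.defaultFS
    // when no mount link matches, as the docs describe.
    static String resolve(String logicalUri, String defaultFs) {
        String path = logicalUri.substring(defaultFs.length());
        for (Map.Entry<String, String> e : MOUNTS.entrySet()) {
            if (path.equals(e.getKey()) || path.startsWith(e.getKey() + "/")) {
                return e.getValue();
            }
        }
        return defaultFs; // no link matched
    }

    public static void main(String[] args) {
        System.out.println(resolve("hdfs://MyCluster1/user", "hdfs://MyCluster1"));
    }
}
```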
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
index f851ef6..52ad49c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
@@ -361,6 +361,12 @@ resume its work, it's a good idea to provision some sort 
of cron job to purge su
 
 Delegation tokens for the cluster to which you are submitting the job 
(including all mounted volumes for that cluster’s mount table), and for input 
and output paths to your map-reduce job (including all volumes mounted via 
mount tables for the specified input and output paths) are all handled 
automatically. In addition, there is a way to add additional delegation tokens 
to the base cluster configuration for special circumstances.
 
+Don't want to change the scheme, or find it difficult to copy mount-table 
configurations to all clients?
+---
+
+Please refer to the [View File System Overload Scheme 
Guide](./ViewFsOverloadScheme.html)
+
+
 Appendix: A Mount Table Configuration Example
 

[hadoop] branch branch-3.3 updated: HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's scheme and target uris schemes are same. Contributed by Uma Maheswara Rao G.

2020-06-16 Thread umamahesh

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 8e71e85  HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's 
scheme and target uris schemes are same. Contributed by Uma Maheswara Rao G.
8e71e85 is described below

commit 8e71e85af70c17f2350f794f8bc2475eb1e3acea
Author: Uma Maheswara Rao G 
AuthorDate: Thu May 21 21:34:58 2020 -0700

HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's scheme and 
target uris schemes are same. Contributed by Uma Maheswara Rao G.

(cherry picked from commit 4734c77b4b64b7c6432da4cc32881aba85f94ea1)
---
 .../org/apache/hadoop/fs/viewfs/ConfigUtil.java|  15 ++-
 .../java/org/apache/hadoop/fs/viewfs/FsGetter.java |  47 
 .../fs/viewfs/HCFSMountTableConfigLoader.java  |   3 +-
 .../org/apache/hadoop/fs/viewfs/NflyFSystem.java   |  29 -
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  24 +---
 .../hadoop/fs/viewfs/ViewFileSystemBaseTest.java   |   1 -
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |  28 -
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 121 +
 8 files changed, 230 insertions(+), 38 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
index 4c3dae9..6dd1f65 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
@@ -136,6 +136,17 @@ public class ConfigUtil {
   }
 
   /**
+   * Add nfly link to configuration for the given mount table.
+   */
+  public static void addLinkNfly(Configuration conf, String mountTableName,
+  String src, String settings, final String targets) {
+conf.set(
+getConfigViewFsPrefix(mountTableName) + "."
++ Constants.CONFIG_VIEWFS_LINK_NFLY + "." + settings + "." + src,
+targets);
+  }
+
+  /**
*
* @param conf
* @param mountTableName
@@ -149,9 +160,7 @@ public class ConfigUtil {
 settings = settings == null
 ? "minReplication=2,repairOnRead=true"
 : settings;
-
-conf.set(getConfigViewFsPrefix(mountTableName) + "." +
-Constants.CONFIG_VIEWFS_LINK_NFLY + "." + settings + "." + src,
+addLinkNfly(conf, mountTableName, src, settings,
 StringUtils.uriToString(targets));
   }
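The extracted `addLinkNfly()` builds one configuration key from the mount table name, the nfly settings, and the source path. A sketch of the key construction (the prefix and `linkNfly` constant are my reading of `Constants`/`ConfigUtil`, so treat the exact values as assumptions):

```java
public class Main {
    // Assumed constant values; the real ones live in the viewfs Constants class.
    static final String PREFIX = "fs.viewfs.mounttable.";
    static final String LINK_NFLY = "linkNfly";

    // Builds the key the addLinkNfly() overload sets:
    //   fs.viewfs.mounttable.<table>.linkNfly.<settings>.<src>
    static String nflyKey(String mountTable, String settings, String src) {
        return PREFIX + mountTable + "." + LINK_NFLY + "." + settings + "." + src;
    }

    public static void main(String[] args) {
        // Settings string taken from the default used a few lines above.
        System.out.println(
                nflyKey("MyCluster", "minReplication=2,repairOnRead=true", "/data"));
    }
}
```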
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
new file mode 100644
index 000..071af11
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+/**
+ * File system instance getter.
+ */
+@Private
+class FsGetter {
+
+  /**
+   * Gets new file system instance of given uri.
+   */
+  public FileSystem getNewInstance(URI uri, Configuration conf)
+  throws IOException {
+return FileSystem.newInstance(uri, conf);
+  }
+
+  /**
+   * Gets file system instance of given uri.
+   */
+  public FileSystem get(URI uri, Configuration conf) throws IOException {
+return FileSystem.get(uri, conf);
+  }
+}
\ No newline at end of file
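`FsGetter` wraps the two ways of obtaining a `FileSystem`: `FileSystem.get()` goes through the shared cache, while `FileSystem.newInstance()` constructs a fresh instance. A toy analog of that distinction (stand-in types, not the Hadoop API):

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    static class FileSystem { }

    // Toy cache keyed by URI string, mimicking FileSystem.get() semantics.
    static final Map<String, FileSystem> CACHE = new HashMap<>();

    // get(): returns the cached instance for a URI, creating it once.
    static FileSystem get(String uri) {
        return CACHE.computeIfAbsent(uri, u -> new FileSystem());
    }

    // getNewInstance(): always constructs a fresh, uncached instance.
    static FileSystem getNewInstance(String uri) {
        return new FileSystem();
    }

    public static void main(String[] args) {
        // Cached lookups hand back the same object...
        System.out.println(get("hdfs://nn1") == get("hdfs://nn1"));
        // ...while a new instance is a different object from the cached one.
        System.out.println(getNewInstance("hdfs://nn1") == get("hdfs://nn1"));
    }
}
```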
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
index c7e5aab..3968e36 100644
--- 

[hadoop] branch branch-3.3 updated: HADOOP-17024. ListStatus on ViewFS root (ls "/") should list the linkFallBack root (configured target root). Contributed by Abhishek Das.

2020-06-16 Thread umamahesh

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 5b248de  HADOOP-17024. ListStatus on ViewFS root (ls "/") should list 
the linkFallBack root (configured target root). Contributed by Abhishek Das.
5b248de is described below

commit 5b248de42d2ae42710531a1514a21d60a1fca4b2
Author: Abhishek Das 
AuthorDate: Mon May 18 22:27:12 2020 -0700

HADOOP-17024. ListStatus on ViewFS root (ls "/") should list the 
linkFallBack root (configured target root). Contributed by Abhishek Das.

(cherry picked from commit ce4ec7445345eb94c6741d416814a4eac319f0a6)
---
 .../org/apache/hadoop/fs/viewfs/InodeTree.java | 13 +++
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java| 49 ++-
 .../java/org/apache/hadoop/fs/viewfs/ViewFs.java   | 51 ++-
 .../fs/viewfs/TestViewFileSystemLinkFallback.java  | 98 ++
 4 files changed, 209 insertions(+), 2 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
index 6992343..50c839b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
@@ -123,6 +123,7 @@ abstract class InodeTree<T> {
private final Map<String, INode<T>> children = new HashMap<>();
private T internalDirFs =  null; //filesystem of this internal directory
private boolean isRoot = false;
+private INodeLink<T> fallbackLink = null;
 
 INodeDir(final String pathToNode, final UserGroupInformation aUgi) {
   super(pathToNode, aUgi);
@@ -149,6 +150,17 @@ abstract class InodeTree {
   return isRoot;
 }
 
+INodeLink<T> getFallbackLink() {
+  return fallbackLink;
+}
+
+void addFallbackLink(INodeLink<T> link) throws IOException {
+  if (!isRoot) {
+throw new IOException("Fallback link can only be added for root");
+  }
+  this.fallbackLink = link;
+}
+
Map<String, INode<T>> getChildren() {
   return Collections.unmodifiableMap(children);
 }
@@ -580,6 +592,7 @@ abstract class InodeTree {
 }
   }
   rootFallbackLink = fallbackLink;
+  getRootDir().addFallbackLink(rootFallbackLink);
 }
 
 if (!gotMountTableEntry) {
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 0acb04d..891a986 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1200,10 +1200,19 @@ public class ViewFileSystem extends FileSystem {
 }
 
 
+/**
+ * {@inheritDoc}
+ *
+ * Note: listStatus on root("/") considers listing from fallbackLink if
+ * available. If the same directory name is present in configured mount
+ * path as well as in fallback link, then only the configured mount path
+ * will be listed in the returned result.
+ */
 @Override
 public FileStatus[] listStatus(Path f) throws AccessControlException,
 FileNotFoundException, IOException {
   checkPathIsSlash(f);
+  FileStatus[] fallbackStatuses = listStatusForFallbackLink();
  FileStatus[] result = new FileStatus[theInternalDir.getChildren().size()];
  int i = 0;
  for (Entry<String, INode<FileSystem>> iEntry :
@@ -1226,7 +1235,45 @@ public class ViewFileSystem extends FileSystem {
 myUri, null));
 }
   }
-  return result;
+  if (fallbackStatuses.length > 0) {
+return consolidateFileStatuses(fallbackStatuses, result);
+  } else {
+return result;
+  }
+}
+
+private FileStatus[] consolidateFileStatuses(FileStatus[] fallbackStatuses,
+FileStatus[] mountPointStatuses) {
+  ArrayList<FileStatus> result = new ArrayList<>();
+  Set<String> pathSet = new HashSet<>();
+  for (FileStatus status : mountPointStatuses) {
+result.add(status);
+pathSet.add(status.getPath().getName());
+  }
+  for (FileStatus status : fallbackStatuses) {
+if (!pathSet.contains(status.getPath().getName())) {
+  result.add(status);
+}
+  }
+  return result.toArray(new FileStatus[0]);
+}
+
+private FileStatus[] listStatusForFallbackLink() throws IOException {
+  if (theInternalDir.isRoot() &&
+  theInternalDir.getFallbackLink() != null) {
+FileSystem linkedFs =
+theInternalDir.getFallbackLink().getTargetFileSystem();
+// Fallback link is only applicable for root
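The consolidation rule documented above — entries from configured mount points shadow same-named entries from the fallback link — can be sketched on plain strings (a simplified stand-in for the `FileStatus` handling; the class and method names here are invented):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ConsolidateSketch {
    // Mount-point names always win; fallback names are appended only
    // when no mount point of the same name exists.
    static List<String> consolidate(List<String> fallback, List<String> mountPoints) {
        List<String> result = new ArrayList<>(mountPoints);
        Set<String> seen = new HashSet<>(mountPoints);
        for (String name : fallback) {
            if (!seen.contains(name)) {
                result.add(name);
            }
        }
        return result;
    }
}
```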

[hadoop] branch branch-3.3 updated: HDFS-15306. Make mount-table to read from central place ( Let's say from HDFS). Contributed by Uma Maheswara Rao G.

2020-06-16 Thread umamahesh

umamahesh pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 544996c  HDFS-15306. Make mount-table to read from central place ( 
Let's say from HDFS). Contributed by Uma Maheswara Rao G.
544996c is described below

commit 544996c85702af7ae241ef2f18e2597e2b4050be
Author: Uma Maheswara Rao G 
AuthorDate: Thu May 14 17:29:35 2020 -0700

HDFS-15306. Make mount-table to read from central place ( Let's say from 
HDFS). Contributed by Uma Maheswara Rao G.

(cherry picked from commit ac4a2e11d98827c7926a34cda27aa7bcfd3f36c1)
---
 .../org/apache/hadoop/fs/viewfs/Constants.java |   5 +
 .../fs/viewfs/HCFSMountTableConfigLoader.java  | 122 ++
 .../hadoop/fs/viewfs/MountTableConfigLoader.java   |  44 +
 .../fs/viewfs/ViewFileSystemOverloadScheme.java| 180 -
 .../org/apache/hadoop/fs/viewfs/package-info.java  |  26 +++
 .../fs/viewfs/TestHCFSMountTableConfigLoader.java  | 165 +++
 ...iewFSOverloadSchemeCentralMountTableConfig.java |  77 +
 ...iewFileSystemOverloadSchemeLocalFileSystem.java |  47 --
 .../apache/hadoop/fs/viewfs/ViewFsTestSetup.java   |  71 +++-
 ...FSOverloadSchemeWithMountTableConfigInHDFS.java |  68 
 ...ViewFileSystemOverloadSchemeWithHdfsScheme.java | 125 +-
 11 files changed, 797 insertions(+), 133 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
index 37f1a16..0a5d4b4 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/Constants.java
@@ -30,6 +30,11 @@ public interface Constants {
* Prefix for the config variable prefix for the ViewFs mount-table
*/
   public static final String CONFIG_VIEWFS_PREFIX = "fs.viewfs.mounttable";
+
+  /**
+   * Prefix for the config variable for the ViewFs mount-table path.
+   */
+  String CONFIG_VIEWFS_MOUNTTABLE_PATH = CONFIG_VIEWFS_PREFIX + ".path";
  
   /**
* Prefix for the home dir for the mount table - if not specified
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
new file mode 100644
index 000..c7e5aab
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * An implementation for Apache Hadoop compatible file system based mount-table
+ * file loading.
+ */
+public class HCFSMountTableConfigLoader implements MountTableConfigLoader {
+  private static final String REGEX_DOT = "[.]";
+  private static final Logger LOGGER =
+  LoggerFactory.getLogger(HCFSMountTableConfigLoader.class);
+  private Path mountTable = null;
+
+  /**
+   * Loads the mount-table configuration from a Hadoop compatible file system
+   * and adds the configuration items to the given configuration. Mount-table
+   * configuration files should be suffixed with a version number.
+   * Format: mount-table.<versionNumber>.xml
+   * Example: mount-table.1.xml
+   * When a user wants to update the mount-table, the expectation is to upload
+   * a new mount-table configuration file with a monotonically increasing
+   * integer as the version number. This API loads the highest version number
+   * file. We can
+   * also 
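The versioned-file convention in the javadoc above (pick the highest `mount-table.<N>.xml`) can be sketched as follows (a hypothetical helper; the real loader walks the directory via `FileSystem` listing APIs rather than a name array):

```java
import java.util.Arrays;
import java.util.Comparator;

public class MountTableVersionPicker {
    // Given names like "mount-table.1.xml" and "mount-table.10.xml",
    // return the one with the highest numeric version suffix.
    static String pickLatest(String[] names) {
        return Arrays.stream(names)
                .max(Comparator.comparingInt(MountTableVersionPicker::version))
                .orElse(null);
    }

    private static int version(String name) {
        // "mount-table.<N>.xml" -> N; split on "." like the loader's REGEX_DOT
        String[] parts = name.split("[.]");
        return Integer.parseInt(parts[parts.length - 2]);
    }
}
```

Note the comparison must be numeric: a lexicographic sort would rank `mount-table.9.xml` above `mount-table.10.xml`.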

[hadoop] branch branch-3.3 updated: HDFS-15372. Files in snapshots no longer see attribute provider permissions. Contributed by Stephen O'Donnell.

2020-06-16 Thread weichiu

weichiu pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 0b9e5ea  HDFS-15372. Files in snapshots no longer see attribute 
provider permissions. Contributed by Stephen O'Donnell.
0b9e5ea is described below

commit 0b9e5ea592b66e1b370feaae9677a7b99fdbd03c
Author: Stephen O'Donnell 
AuthorDate: Tue Jun 16 15:58:16 2020 -0700

HDFS-15372. Files in snapshots no longer see attribute provider 
permissions. Contributed by Stephen O'Donnell.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit 730a39d1388548f22f76132a6734d61c24c3eb72)
---
 .../hadoop/hdfs/server/namenode/FSDirectory.java   |  16 ++-
 .../hdfs/server/namenode/FSPermissionChecker.java  |  46 +
 .../hadoop/hdfs/server/namenode/INodesInPath.java  |  42 
 .../namenode/TestINodeAttributeProvider.java   | 115 +
 4 files changed, 199 insertions(+), 20 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 7eae564..34ee959 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -73,7 +73,6 @@ import javax.annotation.Nullable;
 import java.io.Closeable;
 import java.io.FileNotFoundException;
 import java.io.IOException;
-import java.lang.reflect.Method;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
@@ -2022,7 +2021,20 @@ public class FSDirectory implements Closeable {
   // first empty component for the root.  however file status
   // related calls are expected to strip out the root component according
   // to TestINodeAttributeProvider.
-  byte[][] components = iip.getPathComponents();
+  // Due to HDFS-15372 the attribute provider should receive the resolved
+  // snapshot path. I.e., rather than seeing /d/.snapshot/sn/data it should
+  // see /d/data. However, for the path /d/.snapshot/sn it should see this
+  // full path. node.getPathComponents always resolves the path to the
+  // original location, so we need to check if ".snapshot/sn" is the last
+  // path component to ensure the provider receives the correct components.
+  byte[][] components;
+  if (iip.isSnapshot() && !iip.isDotSnapshotDirPrefix()) {
+// For snapshot paths, use node.getPathComponents, unless the last
+// component is like ".snapshot/sn"
+components = node.getPathComponents();
+  } else {
+components = iip.getPathComponents();
+  }
   components = Arrays.copyOfRange(components, 1, components.length);
   nodeAttrs = ap.getAttributes(components, nodeAttrs);
 }
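The `Arrays.copyOfRange(components, 1, ...)` step above drops the leading empty root component, since attribute providers are expected not to see it. A stdlib illustration of that slicing on a string path (toy example; not the actual `byte[][]` handling in FSDirectory):

```java
import java.util.Arrays;

public class ComponentSliceSketch {
    // Splitting "/d/data" on "/" yields ["", "d", "data"]; drop the
    // leading empty element, mirroring the root-component strip above.
    static String[] componentsWithoutRoot(String path) {
        String[] components = path.split("/");
        return Arrays.copyOfRange(components, 1, components.length);
    }
}
```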
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
index c697ead7..615b164 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
@@ -19,11 +19,14 @@ package org.apache.hadoop.hdfs.server.namenode;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 import java.util.Stack;
 
 import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
 import org.apache.hadoop.ipc.CallerContext;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -207,7 +210,7 @@ public class FSPermissionChecker implements AccessControlEnforcer {
 final INodeAttributes[] inodeAttrs = new INodeAttributes[inodes.length];
 final byte[][] components = inodesInPath.getPathComponents();
 for (int i = 0; i < inodes.length && inodes[i] != null; i++) {
-  inodeAttrs[i] = getINodeAttrs(components, i, inodes[i], snapshotId);
+  inodeAttrs[i] = getINodeAttrs(inodes[i], snapshotId);
 }
 
 String path = inodesInPath.getPath();
@@ -257,8 +260,7 @@ public class FSPermissionChecker implements AccessControlEnforcer {
   void checkPermission(INode inode, int snapshotId, FsAction access)
   throws AccessControlException {
 byte[][] pathComponents = inode.getPathComponents();
-INodeAttributes nodeAttributes = getINodeAttrs(pathComponents,
-pathComponents.length - 1, inode, snapshotId);
+INodeAttributes nodeAttributes = getINodeAttrs(inode, snapshotId);

[hadoop] branch trunk updated: HDFS-15372. Files in snapshots no longer see attribute provider permissions. Contributed by Stephen O'Donnell.

2020-06-16 Thread weichiu

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 730a39d  HDFS-15372. Files in snapshots no longer see attribute 
provider permissions. Contributed by Stephen O'Donnell.
730a39d is described below

commit 730a39d1388548f22f76132a6734d61c24c3eb72
Author: Stephen O'Donnell 
AuthorDate: Tue Jun 16 15:58:16 2020 -0700

HDFS-15372. Files in snapshots no longer see attribute provider 
permissions. Contributed by Stephen O'Donnell.

Signed-off-by: Wei-Chiu Chuang 
---
 .../hadoop/hdfs/server/namenode/FSDirectory.java   |  16 ++-
 .../hdfs/server/namenode/FSPermissionChecker.java  |  46 +
 .../hadoop/hdfs/server/namenode/INodesInPath.java  |  42 
 .../namenode/TestINodeAttributeProvider.java   | 115 +
 4 files changed, 199 insertions(+), 20 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 5895c6b..cd9eb09 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -73,7 +73,6 @@ import javax.annotation.Nullable;
 import java.io.Closeable;
 import java.io.FileNotFoundException;
 import java.io.IOException;
-import java.lang.reflect.Method;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
@@ -2032,7 +2031,20 @@ public class FSDirectory implements Closeable {
   // first empty component for the root.  however file status
   // related calls are expected to strip out the root component according
   // to TestINodeAttributeProvider.
-  byte[][] components = iip.getPathComponents();
+  // Due to HDFS-15372 the attribute provider should receive the resolved
+  // snapshot path. I.e., rather than seeing /d/.snapshot/sn/data it should
+  // see /d/data. However, for the path /d/.snapshot/sn it should see this
+  // full path. node.getPathComponents always resolves the path to the
+  // original location, so we need to check if ".snapshot/sn" is the last
+  // path component to ensure the provider receives the correct components.
+  byte[][] components;
+  if (iip.isSnapshot() && !iip.isDotSnapshotDirPrefix()) {
+// For snapshot paths, use node.getPathComponents, unless the last
+// component is like ".snapshot/sn"
+components = node.getPathComponents();
+  } else {
+components = iip.getPathComponents();
+  }
   components = Arrays.copyOfRange(components, 1, components.length);
   nodeAttrs = ap.getAttributes(components, nodeAttrs);
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
index c697ead7..615b164 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
@@ -19,11 +19,14 @@ package org.apache.hadoop.hdfs.server.namenode;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collection;
 import java.util.List;
 import java.util.Stack;
 
 import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
 import org.apache.hadoop.ipc.CallerContext;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -207,7 +210,7 @@ public class FSPermissionChecker implements AccessControlEnforcer {
 final INodeAttributes[] inodeAttrs = new INodeAttributes[inodes.length];
 final byte[][] components = inodesInPath.getPathComponents();
 for (int i = 0; i < inodes.length && inodes[i] != null; i++) {
-  inodeAttrs[i] = getINodeAttrs(components, i, inodes[i], snapshotId);
+  inodeAttrs[i] = getINodeAttrs(inodes[i], snapshotId);
 }
 
 String path = inodesInPath.getPath();
@@ -257,8 +260,7 @@ public class FSPermissionChecker implements AccessControlEnforcer {
   void checkPermission(INode inode, int snapshotId, FsAction access)
   throws AccessControlException {
 byte[][] pathComponents = inode.getPathComponents();
-INodeAttributes nodeAttributes = getINodeAttrs(pathComponents,
-pathComponents.length - 1, inode, snapshotId);
+INodeAttributes nodeAttributes = getINodeAttrs(inode, snapshotId);
 try {
   INodeAttributes[] iNodeAttr = {nodeAttributes};
   

[hadoop] branch branch-3.3 updated: YARN-10274. Merge QueueMapping and QueueMappingEntity. Contributed by Gergely Pollak

2020-06-16 Thread snemeth

snemeth pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 8be302a  YARN-10274. Merge QueueMapping and QueueMappingEntity. 
Contributed by Gergely Pollak
8be302a is described below

commit 8be302a3b8407921753978621d59f6c3eb53f38b
Author: Szilard Nemeth 
AuthorDate: Tue Jun 16 18:25:47 2020 +0200

YARN-10274. Merge QueueMapping and QueueMappingEntity. Contributed by 
Gergely Pollak
---
 .../placement/AppNameMappingPlacementRule.java | 18 ++--
 .../resourcemanager/placement/QueueMapping.java| 15 +++-
 .../placement/QueueMappingEntity.java  | 98 --
 .../placement/QueuePlacementRuleUtils.java | 23 ++---
 .../capacity/CapacitySchedulerConfiguration.java   | 22 ++---
 .../placement/TestAppNameMappingPlacementRule.java | 22 +++--
 .../placement/TestPlacementManager.java|  8 +-
 .../TestCapacitySchedulerQueueMappingFactory.java  | 14 ++--
 8 files changed, 74 insertions(+), 146 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
index c8a29b4..cf725b6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
@@ -48,7 +48,7 @@ public class AppNameMappingPlacementRule extends PlacementRule {
   private static final String QUEUE_MAPPING_NAME = "app-name";
 
   private boolean overrideWithQueueMappings = false;
-  private List<QueueMappingEntity> mappings = null;
+  private List<QueueMapping> mappings = null;
   protected CapacitySchedulerQueueManager queueManager;
 
   public AppNameMappingPlacementRule() {
@@ -56,7 +56,7 @@ public class AppNameMappingPlacementRule extends PlacementRule {
   }
 
   public AppNameMappingPlacementRule(boolean overrideWithQueueMappings,
-  List<QueueMappingEntity> newMappings) {
+  List<QueueMapping> newMappings) {
 this.overrideWithQueueMappings = overrideWithQueueMappings;
 this.mappings = newMappings;
   }
@@ -76,16 +76,16 @@ public class AppNameMappingPlacementRule extends PlacementRule {
 LOG.info(
 "Initialized App Name queue mappings, override: " + 
overrideWithQueueMappings);
 
-List<QueueMappingEntity> queueMappings =
+List<QueueMapping> queueMappings =
 conf.getQueueMappingEntity(QUEUE_MAPPING_NAME);
 
 // Get new user mappings
-List<QueueMappingEntity> newMappings = new ArrayList<>();
+List<QueueMapping> newMappings = new ArrayList<>();
 
 queueManager = schedulerContext.getCapacitySchedulerQueueManager();
 
 // check if mappings refer to valid queues
-for (QueueMappingEntity mapping : queueMappings) {
+for (QueueMapping mapping : queueMappings) {
   QueuePath queuePath = mapping.getQueuePath();
 
   if (isStaticQueueMapping(mapping)) {
@@ -109,7 +109,7 @@ public class AppNameMappingPlacementRule extends PlacementRule {
   //validate if parent queue is specified,
   // then it should exist and
   // be an instance of AutoCreateEnabledParentQueue
-  QueueMappingEntity newMapping =
+  QueueMapping newMapping =
   validateAndGetAutoCreatedQueueMapping(queueManager, mapping,
   queuePath);
   if (newMapping == null) {
@@ -123,7 +123,7 @@ public class AppNameMappingPlacementRule extends PlacementRule {
   //   if its an instance of leaf queue
   //   if its an instance of auto created leaf queue,
   // then extract parent queue name and update queue mapping
-  QueueMappingEntity newMapping = validateAndGetQueueMapping(
+  QueueMapping newMapping = validateAndGetQueueMapping(
   queueManager, queue, mapping, queuePath);
   newMappings.add(newMapping);
 }
@@ -134,7 +134,7 @@ public class AppNameMappingPlacementRule extends PlacementRule {
 // if parent queue is specified, then
 //  parent queue exists and an instance of AutoCreateEnabledParentQueue
 //
-QueueMappingEntity newMapping = validateAndGetAutoCreatedQueueMapping(
+QueueMapping newMapping = validateAndGetAutoCreatedQueueMapping(
 queueManager, mapping, queuePath);
 if (newMapping != null) {
   newMappings.add(newMapping);
@@ -160,7 +160,7 @@ public class AppNameMappingPlacementRule extends PlacementRule {
 
   private 

[hadoop] branch branch-3.3 updated: YARN-10292. FS-CS converter: add an option to enable asynchronous scheduling in CapacityScheduler. Contributed by Benjamin Teke

2020-06-16 Thread snemeth

snemeth pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 52efe48  YARN-10292. FS-CS converter: add an option to enable 
asynchronous scheduling in CapacityScheduler. Contributed by Benjamin Teke
52efe48 is described below

commit 52efe48d79cd0175002ea1fc140fde72fcef5a6c
Author: Szilard Nemeth 
AuthorDate: Tue Jun 16 18:01:39 2020 +0200

YARN-10292. FS-CS converter: add an option to enable asynchronous 
scheduling in CapacityScheduler. Contributed by Benjamin Teke
---
 .../fair/converter/ConversionOptions.java  |  9 +
 .../FSConfigToCSConfigArgumentHandler.java |  5 +++
 .../converter/FSConfigToCSConfigConverter.java |  3 +-
 .../fair/converter/FSYarnSiteConverter.java|  6 ++-
 .../TestFSConfigToCSConfigArgumentHandler.java | 31 +++
 .../converter/TestFSConfigToCSConfigConverter.java | 35 +
 .../fair/converter/TestFSYarnSiteConverter.java| 44 ++
 7 files changed, 123 insertions(+), 10 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/ConversionOptions.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/ConversionOptions.java
index 7fec0a8..aae1d55 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/ConversionOptions.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/ConversionOptions.java
@@ -22,6 +22,7 @@ public class ConversionOptions {
   private DryRunResultHolder dryRunResultHolder;
   private boolean dryRun;
   private boolean noTerminalRuleCheck;
+  private boolean enableAsyncScheduler;
 
   public ConversionOptions(DryRunResultHolder dryRunResultHolder,
   boolean dryRun) {
@@ -41,6 +42,14 @@ public class ConversionOptions {
 return noTerminalRuleCheck;
   }
 
+  public void setEnableAsyncScheduler(boolean enableAsyncScheduler) {
+this.enableAsyncScheduler = enableAsyncScheduler;
+  }
+
+  public boolean isEnableAsyncScheduler() {
+return enableAsyncScheduler;
+  }
+
   public void handleWarning(String msg, Logger log) {
 if (dryRun) {
   dryRunResultHolder.addDryRunWarning(msg);
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSConfigToCSConfigArgumentHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSConfigToCSConfigArgumentHandler.java
index 5bd3b1a..c2554a4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSConfigToCSConfigArgumentHandler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSConfigToCSConfigArgumentHandler.java
@@ -109,6 +109,9 @@ public class FSConfigToCSConfigArgumentHandler {
 SKIP_VERIFICATION("skip verification", "s",
 "skip-verification",
 "Skips the verification of the converted configuration", false),
+ENABLE_ASYNC_SCHEDULER("enable asynchronous scheduler", "a", "enable-async-scheduler",
+  "Enables the Asynchronous scheduler which decouples the CapacityScheduler" +
+" scheduling from Node Heartbeats.", false),
 HELP("help", "h", "help", "Displays the list of options", false);
 
 private final String name;
@@ -220,6 +223,8 @@ public class FSConfigToCSConfigArgumentHandler {
 conversionOptions.setDryRun(dryRun);
 conversionOptions.setNoTerminalRuleCheck(
 cliParser.hasOption(CliOption.NO_TERMINAL_RULE_CHECK.shortSwitch));
+conversionOptions.setEnableAsyncScheduler(
+  cliParser.hasOption(CliOption.ENABLE_ASYNC_SCHEDULER.shortSwitch));
 
 checkOptionPresent(cliParser, CliOption.YARN_SITE);
 checkOutputDefined(cliParser, dryRun);
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSConfigToCSConfigConverter.java