[hadoop] branch branch-3.1 updated: HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle CVE-2019-20444, CVE-2019-16869

2020-03-09 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new e829631  HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle 
CVE-2019-20444,CVE-2019-16869
e829631 is described below

commit e829631eed3c94006ed7826ddef4f09d3d7591f9
Author: Brahma Reddy Battula 
AuthorDate: Mon Mar 9 19:21:58 2020 +0530

HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle 
CVE-2019-20444,CVE-2019-16869

(cherry picked from commit c6b8a3038646697b77f6db54a2ef6266a9fc7888)
(cherry picked from commit 74fa55afc3d333d7d99397754f45b3fc54bc0f4a)

 Conflicts:
hadoop-project/pom.xml
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 0983579..a711873 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -787,7 +787,7 @@
       <dependency>
         <groupId>io.netty</groupId>
         <artifactId>netty-all</artifactId>
-        <version>4.1.42.Final</version>
+        <version>4.1.45.Final</version>
       </dependency>
 
   


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.2 updated: HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle CVE-2019-20444, CVE-2019-16869

2020-03-09 Thread weichiu

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 74fa55a  HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle 
CVE-2019-20444,CVE-2019-16869
74fa55a is described below

commit 74fa55afc3d333d7d99397754f45b3fc54bc0f4a
Author: Brahma Reddy Battula 
AuthorDate: Mon Mar 9 19:21:58 2020 +0530

HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle 
CVE-2019-20444,CVE-2019-16869

(cherry picked from commit c6b8a3038646697b77f6db54a2ef6266a9fc7888)
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 1faa123..548ffb3 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -133,7 +133,7 @@
 4.1.0-incubating
 3.2.4
     <netty.version>3.10.6.Final</netty.version>
-    <netty4.version>4.1.42.Final</netty4.version>
+    <netty4.version>4.1.45.Final</netty4.version>
 
 
 0.5.1





[hadoop] branch branch-3.2 updated: HADOOP-16643. Update netty4 to the latest 4.1.42. Contributed by Lisheng Sun.

2020-03-09 Thread weichiu

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 307abb7  HADOOP-16643. Update netty4 to the latest 4.1.42. Contributed 
by Lisheng Sun.
307abb7 is described below

commit 307abb7ce85534a79b6054256fa945cb5f47f7b7
Author: Wei-Chiu Chuang 
AuthorDate: Tue Oct 15 08:55:14 2019 -0700

HADOOP-16643. Update netty4 to the latest 4.1.42. Contributed by Lisheng 
Sun.

(cherry picked from commit 85af77c75768416db24ca506fd1704ce664ca92f)

 Conflicts:
hadoop-project/pom.xml
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 4ce69a2..1faa123 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -133,7 +133,7 @@
 4.1.0-incubating
 3.2.4
     <netty.version>3.10.6.Final</netty.version>
-    <netty4.version>4.0.52.Final</netty4.version>
+    <netty4.version>4.1.42.Final</netty4.version>
 
 
 0.5.1





[hadoop] branch branch-3.1 updated: HADOOP-16643. Update netty4 to the latest 4.1.42. Contributed by Lisheng Sun.

2020-03-09 Thread weichiu

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 7bb5bf0  HADOOP-16643. Update netty4 to the latest 4.1.42. Contributed 
by Lisheng Sun.
7bb5bf0 is described below

commit 7bb5bf0e4ef8126921a06db8462af2b029372c80
Author: Wei-Chiu Chuang 
AuthorDate: Tue Oct 15 08:55:14 2019 -0700

HADOOP-16643. Update netty4 to the latest 4.1.42. Contributed by Lisheng 
Sun.

(cherry picked from commit 85af77c75768416db24ca506fd1704ce664ca92f)
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index c1994a3..0983579 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -787,7 +787,7 @@
       <dependency>
         <groupId>io.netty</groupId>
         <artifactId>netty-all</artifactId>
-        <version>4.0.52.Final</version>
+        <version>4.1.42.Final</version>
       </dependency>
 
   





[hadoop] branch branch-3.0 updated: HADOOP-16840. AliyunOSS: getFileStatus throws FileNotFoundException in versioning bucket. Contributed by wujinhu.

2020-03-09 Thread wwei

wwei pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new a55f96c  HADOOP-16840. AliyunOSS: getFileStatus throws 
FileNotFoundException in versioning bucket. Contributed by wujinhu.
a55f96c is described below

commit a55f96c053468f396812fd3c7eb006069c85f30b
Author: Weiwei Yang 
AuthorDate: Sun Mar 8 21:01:34 2020 -0700

HADOOP-16840. AliyunOSS: getFileStatus throws FileNotFoundException in 
versioning bucket. Contributed by wujinhu.

(cherry picked from commit 6dfe00c71eb3721e9be3fc42349a81c4b013ada1)
---
 .../hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java  | 18 +++--
 .../oss/TestAliyunOSSFileSystemContract.java   | 23 ++
 2 files changed, 35 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index ad359c6..3f87289 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -273,12 +273,18 @@ public class AliyunOSSFileSystem extends FileSystem {
     }
     if (meta == null) {
       ObjectListing listing = store.listObjects(key, 1, null, false);
-      if (CollectionUtils.isNotEmpty(listing.getObjectSummaries()) ||
-          CollectionUtils.isNotEmpty(listing.getCommonPrefixes())) {
-        return new OSSFileStatus(0, true, 1, 0, 0, qualifiedPath, username);
-      } else {
-        throw new FileNotFoundException(path + ": No such file or directory!");
-      }
+      do {
+        if (CollectionUtils.isNotEmpty(listing.getObjectSummaries()) ||
+            CollectionUtils.isNotEmpty(listing.getCommonPrefixes())) {
+          return new OSSFileStatus(0, true, 1, 0, 0, qualifiedPath, username);
+        } else if (listing.isTruncated()) {
+          listing = store.listObjects(key, 1000, listing.getNextMarker(),
+              false);
+        } else {
+          throw new FileNotFoundException(
+              path + ": No such file or directory!");
+        }
+      } while (true);
     } else if (objectRepresentsDirectory(key, meta.getContentLength())) {
       return new OSSFileStatus(0, true, 1, 0, meta.getLastModified().getTime(),
           qualifiedPath, username);
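The hunk above replaces a single one-shot listObjects probe with a marker loop, so an empty first page (as a versioning bucket can return when it holds only delete markers) no longer short-circuits into FileNotFoundException. A minimal, self-contained sketch of that loop shape -- the paged-listing types here are hypothetical stand-ins, not the Aliyun OSS SDK classes:

```java
import java.util.Iterator;
import java.util.List;

public class ListingLoop {
  // Pages of object summaries as a paged store would return them; an empty
  // page with more pages behind it plays the role of a truncated listing.
  static boolean prefixExists(Iterator<List<String>> pages) {
    List<String> page = pages.next();   // first probe, like listObjects(key, 1, ...)
    do {
      if (!page.isEmpty()) {
        return true;                    // found an entry: treat as a directory
      } else if (pages.hasNext()) {     // truncated: fetch the next page
        page = pages.next();
      } else {
        return false;                   // exhausted: no such file or directory
      }
    } while (true);
  }

  public static void main(String[] args) {
    // First page holds only delete markers (empty), second page a live object.
    System.out.println(prefixExists(List.of(List.<String>of(), List.of("sub1")).iterator()));
    System.out.println(prefixExists(List.of(List.<String>of()).iterator()));
  }
}
```

The pre-fix code corresponds to inspecting only the first page, which is why a versioning bucket with a leading empty page produced a spurious FileNotFoundException.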
diff --git 
a/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
 
b/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
index a83c6da..e6467f4 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
@@ -31,6 +31,7 @@ import org.junit.Test;
 
 import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.util.Arrays;
 
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -97,6 +98,28 @@ public class TestAliyunOSSFileSystemContract
   }
 
   @Test
+  public void testGetFileStatusInVersioningBucket() throws Exception {
+    Path file = this.path("/test/hadoop/file");
+    for (int i = 1; i <= 30; ++i) {
+      this.createFile(new Path(file, "sub" + i));
+    }
+    assertTrue("File exists", this.fs.exists(file));
+    FileStatus fs = this.fs.getFileStatus(file);
+    assertEquals(fs.getOwner(),
+        UserGroupInformation.getCurrentUser().getShortUserName());
+    assertEquals(fs.getGroup(),
+        UserGroupInformation.getCurrentUser().getShortUserName());
+
+    AliyunOSSFileSystemStore store = ((AliyunOSSFileSystem)this.fs).getStore();
+    for (int i = 0; i < 29; ++i) {
+      store.deleteObjects(Arrays.asList("test/hadoop/file/sub" + i));
+    }
+
+    // HADOOP-16840, will throw FileNotFoundException without this fix
+    this.fs.getFileStatus(file);
+  }
+
+  @Test
   public void testDeleteSubdir() throws IOException {
 Path parentDir = this.path("/test/hadoop");
 Path file = this.path("/test/hadoop/file");





[hadoop] branch branch-2 updated: HADOOP-16840. AliyunOSS: getFileStatus throws FileNotFoundException in versioning bucket. Contributed by wujinhu.

2020-03-09 Thread wwei

wwei pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 70fd950  HADOOP-16840. AliyunOSS: getFileStatus throws 
FileNotFoundException in versioning bucket. Contributed by wujinhu.
70fd950 is described below

commit 70fd9501aecde4f51bed739b4549dc163d1b66f4
Author: Weiwei Yang 
AuthorDate: Sun Mar 8 21:01:34 2020 -0700

HADOOP-16840. AliyunOSS: getFileStatus throws FileNotFoundException in 
versioning bucket. Contributed by wujinhu.

(cherry picked from commit 6dfe00c71eb3721e9be3fc42349a81c4b013ada1)
---
 .../hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java  | 18 +++--
 .../oss/TestAliyunOSSFileSystemContract.java   | 23 ++
 2 files changed, 35 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index 414eafa..02088c8 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -273,12 +273,18 @@ public class AliyunOSSFileSystem extends FileSystem {
     }
     if (meta == null) {
       ObjectListing listing = store.listObjects(key, 1, null, false);
-      if (CollectionUtils.isNotEmpty(listing.getObjectSummaries()) ||
-          CollectionUtils.isNotEmpty(listing.getCommonPrefixes())) {
-        return new OSSFileStatus(0, true, 1, 0, 0, qualifiedPath, username);
-      } else {
-        throw new FileNotFoundException(path + ": No such file or directory!");
-      }
+      do {
+        if (CollectionUtils.isNotEmpty(listing.getObjectSummaries()) ||
+            CollectionUtils.isNotEmpty(listing.getCommonPrefixes())) {
+          return new OSSFileStatus(0, true, 1, 0, 0, qualifiedPath, username);
+        } else if (listing.isTruncated()) {
+          listing = store.listObjects(key, 1000, listing.getNextMarker(),
+              false);
+        } else {
+          throw new FileNotFoundException(
+              path + ": No such file or directory!");
+        }
+      } while (true);
     } else if (objectRepresentsDirectory(key, meta.getContentLength())) {
       return new OSSFileStatus(0, true, 1, 0, meta.getLastModified().getTime(),
           qualifiedPath, username);
diff --git 
a/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
 
b/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
index a83c6da..e6467f4 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
@@ -31,6 +31,7 @@ import org.junit.Test;
 
 import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.util.Arrays;
 
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -97,6 +98,28 @@ public class TestAliyunOSSFileSystemContract
   }
 
   @Test
+  public void testGetFileStatusInVersioningBucket() throws Exception {
+    Path file = this.path("/test/hadoop/file");
+    for (int i = 1; i <= 30; ++i) {
+      this.createFile(new Path(file, "sub" + i));
+    }
+    assertTrue("File exists", this.fs.exists(file));
+    FileStatus fs = this.fs.getFileStatus(file);
+    assertEquals(fs.getOwner(),
+        UserGroupInformation.getCurrentUser().getShortUserName());
+    assertEquals(fs.getGroup(),
+        UserGroupInformation.getCurrentUser().getShortUserName());
+
+    AliyunOSSFileSystemStore store = ((AliyunOSSFileSystem)this.fs).getStore();
+    for (int i = 0; i < 29; ++i) {
+      store.deleteObjects(Arrays.asList("test/hadoop/file/sub" + i));
+    }
+
+    // HADOOP-16840, will throw FileNotFoundException without this fix
+    this.fs.getFileStatus(file);
+  }
+
+  @Test
   public void testDeleteSubdir() throws IOException {
 Path parentDir = this.path("/test/hadoop");
 Path file = this.path("/test/hadoop/file");





[hadoop] branch trunk updated: YARN-9419. Log a warning if GPU isolation is enabled but LinuxContainerExecutor is disabled. Contributed by Andras Gyori

2020-03-09 Thread snemeth

snemeth pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 44afe11  YARN-9419. Log a warning if GPU isolation is enabled but 
LinuxContainerExecutor is disabled. Contributed by Andras Gyori
44afe11 is described below

commit 44afe1154dd8ce937470c04a126310989f3dc2cb
Author: Szilard Nemeth 
AuthorDate: Mon Mar 9 16:08:24 2020 +0100

YARN-9419. Log a warning if GPU isolation is enabled but 
LinuxContainerExecutor is disabled. Contributed by Andras Gyori
---
 .../resourceplugin/gpu/GpuResourcePlugin.java | 15 +++
 1 file changed, 15 insertions(+)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/GpuResourcePlugin.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/GpuResourcePlugin.java
index 25ea193..233dd0d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/GpuResourcePlugin.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/GpuResourcePlugin.java
@@ -20,9 +20,12 @@ package 
org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugi
 
 import java.util.List;
 
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor;
 import org.apache.hadoop.yarn.server.nodemanager.Context;
+import org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandler;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandler;
@@ -59,6 +62,7 @@ public class GpuResourcePlugin implements ResourcePlugin {
 
   @Override
   public void initialize(Context context) throws YarnException {
+validateExecutorConfig(context.getConf());
 this.gpuDiscoverer.initialize(context.getConf(),
 new NvidiaBinaryHelper());
 this.dockerCommandPlugin =
@@ -66,6 +70,17 @@ public class GpuResourcePlugin implements ResourcePlugin {
 context.getConf());
   }
 
+  private void validateExecutorConfig(Configuration conf) {
+    Class<? extends ContainerExecutor> executorClass = conf.getClass(
+        YarnConfiguration.NM_CONTAINER_EXECUTOR, DefaultContainerExecutor.class,
+        ContainerExecutor.class);
+
+    if (executorClass.equals(DefaultContainerExecutor.class)) {
+      LOG.warn("Using GPU plugin with disabled LinuxContainerExecutor" +
+          " is considered to be unsafe.");
+    }
+  }
+
   @Override
   public ResourceHandler createResourceHandler(
   Context context, CGroupsHandler cGroupsHandler,
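The new validateExecutorConfig() reads the configured executor class with DefaultContainerExecutor as the fallback and warns when that default is paired with the GPU plugin. A self-contained sketch of the same class-equality check -- the executor classes here are stand-ins, not the real YARN types:

```java
// Stand-ins for the YARN executor classes; only the class-equality check
// from the patch is illustrated here.
class ContainerExecutor {}
class DefaultContainerExecutor extends ContainerExecutor {}
class LinuxContainerExecutor extends ContainerExecutor {}

public class ExecutorCheck {
  // Mirrors validateExecutorConfig(): GPU isolation with the default
  // (non-Linux) executor is flagged as unsafe.
  static boolean unsafeForGpu(Class<? extends ContainerExecutor> executorClass) {
    return executorClass.equals(DefaultContainerExecutor.class);
  }

  public static void main(String[] args) {
    System.out.println(unsafeForGpu(DefaultContainerExecutor.class)); // warn case
    System.out.println(unsafeForGpu(LinuxContainerExecutor.class));   // fine
  }
}
```

Note the patch only logs a warning rather than failing fast, so misconfigured nodes still start.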





[hadoop] branch trunk updated: HADOOP-16898. Batch listing of multiple directories via an (unstable) interface

2020-03-09 Thread stevel

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c734d69  HADOOP-16898. Batch listing of multiple directories via an 
(unstable) interface
c734d69 is described below

commit c734d69a55693143d0aba2f7f5a793b11c8c50a5
Author: Steve Loughran 
AuthorDate: Mon Mar 9 14:50:47 2020 +

HADOOP-16898. Batch listing of multiple directories via an (unstable) 
interface

Contributed by Steve Loughran.

This moves the new API of HDFS-13616 into an interface which is implemented
by the HDFS RPC filesystem client (not WebHDFS or any other connector).

This new interface, BatchListingOperations, is in hadoop-common,
so applications do not need to be compiled with HDFS on the classpath.
They must cast the FS into the interface.

instanceof can probe the client for having the new interface; the patch
also adds a new path capability to probe for this.

The FileSystem implementation is cut; tests updated as appropriate.

All new interfaces/classes/constants are marked as @unstable.

Change-Id: I5623c51f2c75804f58f915dd7e60cb2cffdac681
---
 .../apache/hadoop/fs/BatchListingOperations.java   | 61 ++
 .../apache/hadoop/fs/CommonPathCapabilities.java   |  8 +++
 .../main/java/org/apache/hadoop/fs/FileSystem.java | 27 --
 .../java/org/apache/hadoop/fs/PartialListing.java  |  2 +-
 .../org/apache/hadoop/fs/TestFilterFileSystem.java |  5 --
 .../org/apache/hadoop/fs/TestHarFileSystem.java|  4 --
 .../apache/hadoop/hdfs/DistributedFileSystem.java  | 15 +-
 .../hadoop/hdfs/TestBatchedListDirectories.java| 11 +++-
 8 files changed, 94 insertions(+), 39 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BatchListingOperations.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BatchListingOperations.java
new file mode 100644
index 000..f72b1e2
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BatchListingOperations.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Interface filesystems MAY implement to offer a batched list.
+ * If implemented, filesystems SHOULD declare
+ * {@link CommonPathCapabilities#FS_EXPERIMENTAL_BATCH_LISTING} to be a 
supported
+ * path capability.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+public interface BatchListingOperations {
+
+  /**
+   * Batched listing API that returns {@link PartialListing}s for the
+   * passed Paths.
+   *
+   * @param paths List of paths to list.
+   * @return RemoteIterator that returns corresponding PartialListings.
+   * @throws IOException failure
+   */
+  RemoteIterator<PartialListing<FileStatus>> batchedListStatusIterator(
+      List<Path> paths) throws IOException;
+
+  /**
+   * Batched listing API that returns {@link PartialListing}s for the passed
+   * Paths. The PartialListing will contain {@link LocatedFileStatus} entries
+   * with locations.
+   *
+   * @param paths List of paths to list.
+   * @return RemoteIterator that returns corresponding PartialListings.
+   * @throws IOException failure
+   */
+  RemoteIterator<PartialListing<LocatedFileStatus>>
+      batchedListLocatedStatusIterator(
+          List<Path> paths) throws IOException;
+
+}
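As the commit message notes, applications probe for the interface with instanceof and then cast. A self-contained sketch of that probe-and-cast pattern -- the interface and types below are simplified stand-ins, not the real hadoop-common signatures:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Simplified stand-in for the hadoop-common interface; the real method
// returns RemoteIterator<PartialListing<FileStatus>> and takes List<Path>.
interface BatchListing {
  Iterator<List<String>> batchedListStatusIterator(List<String> paths);
}

// A toy "filesystem" that happens to support batched listing.
class MockFs implements BatchListing {
  public Iterator<List<String>> batchedListStatusIterator(List<String> paths) {
    List<List<String>> partials = new ArrayList<>();
    for (String p : paths) {
      partials.add(List.of(p + "/child"));  // one partial listing per path
    }
    return partials.iterator();
  }
}

public class BatchProbe {
  public static void main(String[] args) {
    Object fs = new MockFs();              // in Hadoop: FileSystem.get(conf)
    if (fs instanceof BatchListing) {      // probe for the optional interface
      BatchListing bl = (BatchListing) fs; // then cast, as the message says
      Iterator<List<String>> it = bl.batchedListStatusIterator(List.of("/a", "/b"));
      while (it.hasNext()) {
        System.out.println(it.next());
      }
    }
  }
}
```

Keeping the interface in hadoop-common is what lets callers compile without HDFS on the classpath: only the instanceof probe and cast touch the optional capability.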
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonPathCapabilities.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonPathCapabilities.java
index 31e6bac..fb46ef8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonPathCapabilities.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonPathCapabilities.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.fs;
 
+import 

[hadoop] branch trunk updated: HADOOP-14630 Contract Tests to verify create, mkdirs and rename under a file is forbidden

2020-03-09 Thread stevel

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d4d4c37  HADOOP-14630 Contract Tests to verify create, mkdirs and 
rename under a file is forbidden
d4d4c37 is described below

commit d4d4c37810d92c927df91d78440c3ad73f46e8a0
Author: Steve Loughran 
AuthorDate: Mon Mar 9 14:43:47 2020 +

HADOOP-14630 Contract Tests to verify create, mkdirs and rename under a 
file is forbidden

Contributed by Steve Loughran.

Not all stores do complete validation here; in particular the S3A
Connector does not: checking the entire directory tree to see whether
any ancestor of a path is a file significantly slows things down.

This check does take place in S3A mkdirs(), which walks backwards up
the list of parent paths until it finds a directory (success) or a file
(failure). In practice, production applications invariably create
destination directories before writing one or more files into them;
restricting the check purely to the mkdirs() call delivers a
significant speedup while implicitly including the checks.

Change-Id: I2c9df748e92b5655232e7d888d896f1868806eb0
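The backwards walk up the parent paths described above can be sketched with java.nio; this is a hypothetical illustration of the rule, not the S3A implementation:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class AncestorCheck {
  // Walk backwards up the parent paths: the first ancestor that exists
  // decides the outcome -- a directory means mkdirs() may proceed, a file
  // forbids it.
  static boolean mkdirsAllowed(Path p) {
    for (Path d = p; d != null; d = d.getParent()) {
      if (Files.exists(d)) {
        return Files.isDirectory(d);
      }
    }
    return true; // no existing ancestor at all
  }

  public static void main(String[] args) throws Exception {
    Path dir = Files.createTempDirectory("anc");
    Path file = dir.resolve("f");
    Files.createFile(file);
    System.out.println(mkdirsAllowed(dir.resolve("a/b")));  // ancestor is a dir
    System.out.println(mkdirsAllowed(file.resolve("c/d"))); // ancestor is a file
  }
}
```

The loop stops at the first existing ancestor, which is why restricting the check to mkdirs() stays cheap: in the common case the immediate parent already exists as a directory.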
---
 .../src/site/markdown/filesystem/filesystem.md |  21 +++-
 .../fs/contract/AbstractContractCreateTest.java| 128 +++--
 .../fs/contract/AbstractContractRenameTest.java|  70 +--
 .../fs/contract/AbstractFSContractTestBase.java|   9 ++
 .../apache/hadoop/fs/contract/ContractOptions.java |   9 ++
 .../java/org/apache/hadoop/hdfs/DFSClient.java |   3 +-
 .../hadoop-aws/src/test/resources/contract/s3a.xml |   5 +
 .../fs/adl/live/TestAdlContractRenameLive.java |  15 +++
 .../TestNativeAzureFileSystemMetricsSystem.java|  35 --
 .../fs/swift/snative/SwiftNativeFileSystem.java|  39 +--
 .../swift/snative/SwiftNativeFileSystemStore.java  |   7 +-
 11 files changed, 295 insertions(+), 46 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
index 07a48f9..665e328 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
@@ -486,11 +486,11 @@ running out of memory as it calculates the partitions.
 
 Any FileSystem that does not actually break files into blocks SHOULD
 return a number for this that results in efficient processing.
-A FileSystem MAY make this user-configurable (the S3 and Swift filesystem clients do this).
+A FileSystem MAY make this user-configurable (the object store connectors usually do this).
 
 ###  `long getDefaultBlockSize(Path p)`
 
-Get the "default" block size for a path —that is, the block size to be used
+Get the "default" block size for a path --that is, the block size to be used
 when writing objects to a path in the filesystem.
 
  Preconditions
@@ -539,14 +539,21 @@ on the filesystem.
 
 ### `boolean mkdirs(Path p, FsPermission permission)`
 
-Create a directory and all its parents
+Create a directory and all its parents.
 
  Preconditions
 
 
+The path must either be a directory or not exist
+ 
     if exists(FS, p) and not isDir(FS, p) :
         raise [ParentNotDirectoryException, FileAlreadyExistsException, IOException]
 
+No ancestor may be a file
+
+    forall d = ancestors(FS, p) :
+        if exists(FS, d) and not isDir(FS, d) :
+            raise [ParentNotDirectoryException, FileAlreadyExistsException, IOException]
 
  Postconditions
 
@@ -586,6 +593,11 @@ Writing to or overwriting a directory must fail.
 
     if isDir(FS, p) : raise {FileAlreadyExistsException, FileNotFoundException, IOException}
 
+No ancestor may be a file
+
+    forall d = ancestors(FS, p) :
+        if exists(FS, d) and not isDir(FS, d) :
+            raise [ParentNotDirectoryException, FileAlreadyExistsException, IOException]
 
 FileSystems may reject the request for other
 reasons, such as the FS being read-only  (HDFS),
@@ -593,7 +605,8 @@ the block size being below the minimum permitted (HDFS),
 the replication count being out of range (HDFS),
 quotas on namespace or filesystem being exceeded, reserved
 names, etc. All rejections SHOULD be `IOException` or a subclass thereof
-and MAY be a `RuntimeException` or subclass. For instance, HDFS may raise a `InvalidPathException`.
+and MAY be a `RuntimeException` or subclass.
+For instance, HDFS may raise an `InvalidPathException`.
 
  Postconditions
 
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractCreateTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractCreateTest.java
index 07c99e0..79222ce 100644
--- 

[hadoop] branch trunk updated (c6b8a30 -> 18050bc)

2020-03-09 Thread stevel

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from c6b8a30  HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle 
CVE-2019-20444,CVE-2019-16869
 add 18050bc  HADOOP-16909 Typo in distcp counters.

No new revisions were added by this update.

Summary of changes:
 .../src/test/java/org/apache/hadoop/io/file/tfile/TestTFile.java  | 8 
 .../org/apache/hadoop/tools/mapred/CopyMapper_Counter.properties  | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)





[hadoop] branch trunk updated: HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle CVE-2019-20444, CVE-2019-16869

2020-03-09 Thread brahma

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c6b8a30  HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle 
CVE-2019-20444,CVE-2019-16869
c6b8a30 is described below

commit c6b8a3038646697b77f6db54a2ef6266a9fc7888
Author: Brahma Reddy Battula 
AuthorDate: Mon Mar 9 19:21:58 2020 +0530

HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle 
CVE-2019-20444,CVE-2019-16869
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 8b07213..77811e3 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -140,7 +140,7 @@
 4.1.0-incubating
 3.2.4
     <netty.version>3.10.6.Final</netty.version>
-    <netty4.version>4.1.42.Final</netty4.version>
+    <netty4.version>4.1.45.Final</netty4.version>
 
 
 0.5.1

