[hadoop] 02/05: HADOOP-16451. Update jackson-databind to 2.9.9.1. Contributed by Siyao Meng.

2019-10-10 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 146908cf3d63876b2c62a7522a911fb2b28add3d
Author: Siyao Meng 
AuthorDate: Wed Jul 24 17:24:54 2019 -0700

HADOOP-16451. Update jackson-databind to 2.9.9.1. Contributed by Siyao Meng.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit 9b8b3acb0a2b87356056c23f3d0f30a97a38cd3d)

 Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml
---
 hadoop-project/pom.xml | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 0c917b6..f2c669b 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -70,6 +70,7 @@
 
     <jackson.version>1.9.13</jackson.version>
     <jackson2.version>2.9.9</jackson2.version>
+    <jackson2.databind.version>2.9.9.1</jackson2.databind.version>
 
 
 4.5.6
@@ -1034,7 +1035,7 @@
       <dependency>
         <groupId>com.fasterxml.jackson.core</groupId>
         <artifactId>jackson-databind</artifactId>
-        <version>${jackson2.version}</version>
+        <version>${jackson2.databind.version}</version>
       </dependency>
       <dependency>
         <groupId>com.fasterxml.jackson.core</groupId>





[hadoop] 01/05: HADOOP-16365. Upgrade jackson-databind to 2.9.9. Contributed by Shweta Yakkali.

2019-10-10 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 43576f4738401315fdb3bc665c609701ae80e965
Author: Shweta Yakkali 
AuthorDate: Wed Jun 12 10:36:34 2019 -0700

HADOOP-16365. Upgrade jackson-databind to 2.9.9. Contributed by Shweta 
Yakkali.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit cf84881dea11639bed48b4c8e8a785a535510e6d)
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index f68507a..0c917b6 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -69,7 +69,7 @@
 
 
     <jackson.version>1.9.13</jackson.version>
-    <jackson2.version>2.9.8</jackson2.version>
+    <jackson2.version>2.9.9</jackson2.version>
 
 
 4.5.6
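
A quick, grounded way to confirm which jackson-databind actually resolved onto a
runtime classpath after bumps like the ones in this digest (not part of these
commits; it uses jackson-databind's own version stamp):

    import com.fasterxml.jackson.databind.cfg.PackageVersion;

    public class JacksonVersionCheck {
      public static void main(String[] args) {
        // Prints the compiled-in version, e.g. "2.9.9" after HADOOP-16365.
        System.out.println(PackageVersion.VERSION);
      }
    }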





[hadoop] branch trunk updated: HDDS-2266. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path. (#1633)

2019-10-10 Thread shashikant
This is an automated email from the ASF dual-hosted git repository.

shashikant pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a031388  HDDS-2266. Avoid evaluation of LOG.trace and LOG.debug 
statement in the read/write path. (#1633)
a031388 is described below

commit a031388a2e8b7ac60ebca5a08216e2dd19ea6933
Author: Siddharth 
AuthorDate: Thu Oct 10 03:00:11 2019 -0700

HDDS-2266. Avoid evaluation of LOG.trace and LOG.debug statement in the 
read/write path. (#1633)
---
 .../apache/hadoop/hdds/scm/pipeline/Pipeline.java  |  3 +-
 .../client/io/BlockOutputStreamEntryPool.java  | 10 ++--
 .../hadoop/ozone/client/io/KeyInputStream.java |  6 ++-
 .../apache/hadoop/ozone/client/rpc/RpcClient.java  | 10 ++--
 .../hadoop/ozone/om/S3SecretManagerImpl.java   |  4 +-
 .../ozone/om/ha/OMFailoverProxyProvider.java   |  6 ++-
 .../hadoop/ozone/om/helpers/OMRatisHelper.java |  4 +-
 .../hadoop/ozone/om/lock/OzoneManagerLock.java | 24 ++
 .../security/OzoneBlockTokenSecretManager.java |  2 +-
 .../OzoneDelegationTokenSecretManager.java |  6 ++-
 .../security/OzoneDelegationTokenSelector.java |  8 +++-
 .../hadoop/ozone/security/OzoneSecretManager.java  |  6 ++-
 .../apache/hadoop/ozone/om/BucketManagerImpl.java  |  6 ++-
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java | 27 +++
 .../hadoop/ozone/om/OpenKeyCleanupService.java |  4 +-
 .../org/apache/hadoop/ozone/om/OzoneManager.java   | 10 ++--
 .../apache/hadoop/ozone/om/PrefixManagerImpl.java  | 11 +++--
 .../apache/hadoop/ozone/om/VolumeManagerImpl.java  | 16 +--
 .../ozone/om/ratis/OzoneManagerDoubleBuffer.java   |  8 ++--
 .../ozone/om/ratis/OzoneManagerRatisClient.java| 53 --
 .../ozone/om/ratis/OzoneManagerRatisServer.java|  6 ++-
 .../request/bucket/acl/OMBucketSetAclRequest.java  |  4 +-
 .../request/volume/acl/OMVolumeSetAclRequest.java  |  6 ++-
 .../OzoneManagerHARequestHandlerImpl.java  |  4 +-
 ...OzoneManagerProtocolServerSideTranslatorPB.java |  4 +-
 .../protocolPB/OzoneManagerRequestHandler.java |  4 +-
 .../ozone/security/acl/OzoneNativeAuthorizer.java  |  8 ++--
 .../hadoop/fs/ozone/BasicOzoneFileSystem.java  |  4 +-
 .../apache/hadoop/ozone/s3/AWSV4AuthParser.java| 10 ++--
 .../hadoop/ozone/s3/OzoneClientProducer.java   |  5 +-
 .../ozone/s3/exception/OS3ExceptionMapper.java |  4 +-
 31 files changed, 182 insertions(+), 101 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
index c62d9773..2828f6e 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
@@ -41,8 +41,7 @@ import java.util.stream.Collectors;
  */
 public final class Pipeline {
 
-  private static final Logger LOG = LoggerFactory
-  .getLogger(Pipeline.class);
+  private static final Logger LOG = LoggerFactory.getLogger(Pipeline.class);
   private final PipelineID id;
   private final ReplicationType type;
   private final ReplicationFactor factor;
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
index 045997f..b179ca5 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
@@ -193,10 +193,12 @@ public class BlockOutputStreamEntryPool {
 .setPipeline(streamEntry.getPipeline()).build();
 locationInfoList.add(info);
   }
-      LOG.debug(
-          "block written " + streamEntry.getBlockID() + ", length " + length
-              + " bcsID " + streamEntry.getBlockID()
-              .getBlockCommitSequenceId());
+      if (LOG.isDebugEnabled()) {
+        LOG.debug(
+            "block written " + streamEntry.getBlockID() + ", length " + length
+                + " bcsID " + streamEntry.getBlockID()
+                .getBlockCommitSequenceId());
+      }
 }
 return locationInfoList;
   }
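
The hunk above shows the commit's pattern: wrap concatenation-heavy debug calls
in an isDebugEnabled() guard. For comparison, a hedged sketch of the equivalent
SLF4J parameterized style, which needs no guard when the arguments are cheap
(class and method names below are illustrative, not from this commit):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class GuardedLoggingSketch {
      private static final Logger LOG =
          LoggerFactory.getLogger(GuardedLoggingSketch.class);

      void onBlockWritten(String blockId, long length, long bcsId) {
        // {} placeholders defer formatting until DEBUG is known to be enabled,
        // so no eager string concatenation happens on the write path.
        LOG.debug("block written {}, length {} bcsID {}", blockId, length, bcsId);

        // A guard still pays off when computing an argument is itself costly:
        if (LOG.isDebugEnabled()) {
          LOG.debug("expensive detail: {}", expensiveSummary());
        }
      }

      private String expensiveSummary() {
        return "...";
      }
    }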
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
index fa1672a..ecbb329 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
@@ -97,8 +97,10 @@ public class KeyInputStream extends InputStream implements 
Seekable {
 long keyLength 

[hadoop] 04/05: HADOOP-16533. Upgrade jackson-databind to 2.9.9.3. (#1354). Contributed by Akira Ajisaka.

2019-10-10 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 510b898c247c966f9db706f6e7af354cf7587837
Author: Akira Ajisaka 
AuthorDate: Wed Aug 28 07:54:35 2019 +0900

HADOOP-16533. Upgrade jackson-databind to 2.9.9.3. (#1354). Contributed by 
Akira Ajisaka.

Reviewed-by: Siyao Meng 
(cherry picked from commit d85d68f6ffb046dc801b263ce664fddb85d8d166)
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 37a4d33..e969e92 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -70,7 +70,7 @@
 
     <jackson.version>1.9.13</jackson.version>
     <jackson2.version>2.9.9</jackson2.version>
-    <jackson2.databind.version>2.9.9.2</jackson2.databind.version>
+    <jackson2.databind.version>2.9.9.3</jackson2.databind.version>
 
 
 4.5.6





[hadoop] branch branch-3.2 updated (eb4bd54 -> 793df59)

2019-10-10 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a change to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from eb4bd54  YARN-7243. Moving logging APIs over to slf4j in 
hadoop-yarn-server-resourcemanager. (#1634)
 new 43576f4  HADOOP-16365. Upgrade jackson-databind to 2.9.9. Contributed 
by Shweta Yakkali.
 new 146908c  HADOOP-16451. Update jackson-databind to 2.9.9.1. Contributed 
by Siyao Meng.
 new 320337d  HADOOP-16487. Update jackson-databind to 2.9.9.2. Contributed 
by Siyao Meng.
 new 510b898  HADOOP-16533. Upgrade jackson-databind to 2.9.9.3. (#1354). 
Contributed by Akira Ajisaka.
 new 793df59  HADOOP-16619. Upgrade jackson and jackson-databind to 2.9.10 
(#1554). Contributed by Siyao Meng.

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 hadoop-project/pom.xml | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)





[hadoop] 05/05: HADOOP-16619. Upgrade jackson and jackson-databind to 2.9.10 (#1554). Contributed by Siyao Meng.

2019-10-10 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 793df595f25410566b554f78b66332b8d460e7ed
Author: Siyao Meng <50227127+smen...@users.noreply.github.com>
AuthorDate: Tue Oct 1 12:46:40 2019 -0700

HADOOP-16619. Upgrade jackson and jackson-databind to 2.9.10 (#1554). 
Contributed by Siyao Meng.

(cherry picked from commit d947ded05388c36f8ac688fc697cfafcbaaa58e7)
---
 hadoop-project/pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index e969e92..6e16f6c 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -69,8 +69,8 @@
 
 
     <jackson.version>1.9.13</jackson.version>
-    <jackson2.version>2.9.9</jackson2.version>
-    <jackson2.databind.version>2.9.9.3</jackson2.databind.version>
+    <jackson2.version>2.9.10</jackson2.version>
+    <jackson2.databind.version>2.9.10</jackson2.databind.version>
 
 
 4.5.6





[hadoop] 03/05: HADOOP-16487. Update jackson-databind to 2.9.9.2. Contributed by Siyao Meng.

2019-10-10 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 320337d737f76e0f266e14d8b6f29790124049a6
Author: Siyao Meng 
AuthorDate: Sun Aug 4 20:02:39 2019 -0700

HADOOP-16487. Update jackson-databind to 2.9.9.2. Contributed by Siyao Meng.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit 9680a8b2371c2ff8b5314e8fa39bdb9f8db46f96)
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index f2c669b..37a4d33 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -70,7 +70,7 @@
 
     <jackson.version>1.9.13</jackson.version>
     <jackson2.version>2.9.9</jackson2.version>
-    <jackson2.databind.version>2.9.9.1</jackson2.databind.version>
+    <jackson2.databind.version>2.9.9.2</jackson2.databind.version>
 
 
 4.5.6





[hadoop] branch trunk updated: HDFS-14900. Fix build failure of hadoop-hdfs-native-client. Contributed by Masatake Iwasaki.

2019-10-10 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 104ccca  HDFS-14900. Fix build failure of hadoop-hdfs-native-client. 
Contributed by Masatake Iwasaki.
104ccca is described below

commit 104ccca916997bbf3c37d87adbae673f4dd42036
Author: Ayush Saxena 
AuthorDate: Thu Oct 10 21:36:02 2019 +0530

HDFS-14900. Fix build failure of hadoop-hdfs-native-client. Contributed by 
Masatake Iwasaki.
---
 BUILDING.txt  | 21 +
 dev-support/docker/Dockerfile | 17 +
 2 files changed, 38 insertions(+)

diff --git a/BUILDING.txt b/BUILDING.txt
index 03dffdd..d3c9a1a 100644
--- a/BUILDING.txt
+++ b/BUILDING.txt
@@ -6,6 +6,7 @@ Requirements:
 * Unix System
 * JDK 1.8
 * Maven 3.3 or later
+* Protocol Buffers 3.7.1 (if compiling native code)
 * CMake 3.1 or newer (if compiling native code)
 * Zlib devel (if compiling native code)
 * Cyrus SASL devel (if compiling native code)
@@ -61,6 +62,16 @@ Installing required packages for clean install of Ubuntu 
14.04 LTS Desktop:
   $ sudo apt-get -y install maven
 * Native libraries
   $ sudo apt-get -y install build-essential autoconf automake libtool cmake 
zlib1g-dev pkg-config libssl-dev libsasl2-dev
+* Protocol Buffers 3.7.1 (required to build native code)
+  $ mkdir -p /opt/protobuf-3.7-src \
+    && curl -L -s -S \
+      https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz \
+      -o /opt/protobuf-3.7.1.tar.gz \
+    && tar xzf /opt/protobuf-3.7.1.tar.gz --strip-components 1 -C /opt/protobuf-3.7-src \
+    && cd /opt/protobuf-3.7-src \
+    && ./configure \
+    && make install \
+    && rm -rf /opt/protobuf-3.7-src
 
 Optional packages:
 
@@ -384,6 +395,15 @@ Installing required dependencies for clean install of 
macOS 10.14:
 * Install native libraries, only openssl is required to compile native code,
 you may optionally install zlib, lz4, etc.
   $ brew install openssl
+* Protocol Buffers 3.7.1 (required to compile native code)
+  $ wget https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz
+  $ mkdir -p protobuf-3.7 && tar zxvf protobuf-java-3.7.1.tar.gz --strip-components 1 -C protobuf-3.7
+  $ cd protobuf-3.7
+  $ ./configure
+  $ make
+  $ make check
+  $ make install
+  $ protoc --version
 
 Note that building Hadoop 3.1.1/3.1.2/3.2.0 native code from source is broken
 on macOS. For 3.1.1/3.1.2, you need to manually backport YARN-8622. For 3.2.0,
@@ -409,6 +429,7 @@ Requirements:
 * Windows System
 * JDK 1.8
 * Maven 3.0 or later
+* Protocol Buffers 3.7.1
 * CMake 3.1 or newer
 * Visual Studio 2010 Professional or Higher
 * Windows SDK 8.1 (if building CPU rate control for the container executor)
diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index 969d8bb..65cada2 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -106,6 +106,23 @@ ENV CMAKE_HOME /opt/cmake
 ENV PATH "${PATH}:/opt/cmake/bin"
 
 ##
+# Install Google Protobuf 3.7.1 (2.6.0 ships with Xenial)
+##
+# hadolint ignore=DL3003
+RUN mkdir -p /opt/protobuf-src \
+    && curl -L -s -S \
+      https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz \
+      -o /opt/protobuf.tar.gz \
+    && tar xzf /opt/protobuf.tar.gz --strip-components 1 -C /opt/protobuf-src \
+    && cd /opt/protobuf-src \
+    && ./configure --prefix=/opt/protobuf \
+    && make install \
+    && cd /root \
+    && rm -rf /opt/protobuf-src
+ENV PROTOBUF_HOME /opt/protobuf
+ENV PATH "${PATH}:/opt/protobuf/bin"
+
+##
 # Install Apache Maven 3.3.9 (3.3.9 ships with Xenial)
 ##
 # hadolint ignore=DL3008





[hadoop] branch branch-2 updated: HDFS-14162. [SBN read] Allow Balancer to work with Observer node. Add a new ProxyCombiner allowing for multiple related protocols to be combined. Allow AlignmentContext to be passed in NameNodeProxyFactory. Contributed by Erik Krogen.

2019-10-10 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 8337d1a  HDFS-14162. [SBN read] Allow Balancer to work with Observer 
node. Add a new ProxyCombiner allowing for multiple related protocols to be 
combined. Allow AlignmentContext to be passed in NameNodeProxyFactory. 
Contributed by Erik Krogen.
8337d1a is described below

commit 8337d1a4894b7c36802b903c0b6b858955c1faec
Author: Erik Krogen 
AuthorDate: Thu Oct 10 09:06:58 2019 -0700

HDFS-14162. [SBN read] Allow Balancer to work with Observer node. Add a new 
ProxyCombiner allowing for multiple related protocols to be combined. Allow 
AlignmentContext to be passed in NameNodeProxyFactory. Contributed by Erik 
Krogen.
---
 .../java/org/apache/hadoop/ipc/ProxyCombiner.java  | 137 +
 .../hdfs/server/namenode/ha/HAProxyFactory.java|   7 ++
 .../namenode/ha/ObserverReadProxyProvider.java |   2 +-
 .../ha/TestConfiguredFailoverProxyProvider.java|   6 +
 .../ha/TestRequestHedgingProxyProvider.java|   6 +
 .../org/apache/hadoop/hdfs/NameNodeProxies.java| 109 ++--
 .../hdfs/server/balancer/NameNodeConnector.java|  12 +-
 .../server/namenode/ha/NameNodeHAProxyFactory.java |   9 +-
 .../hdfs/server/protocol/BalancerProtocols.java|  30 +
 .../balancer/TestBalancerWithHANameNodes.java  | 101 ++-
 .../hadoop/hdfs/server/namenode/ha/HATestUtil.java |  12 +-
 11 files changed, 349 insertions(+), 82 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProxyCombiner.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProxyCombiner.java
new file mode 100644
index 000..fbafabc
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProxyCombiner.java
@@ -0,0 +1,137 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ipc;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Method;
+import java.lang.reflect.Proxy;
+
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.hadoop.ipc.Client.ConnectionId;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * A utility class used to combine two protocol proxies.
+ * See {@link #combine(Class, Object...)}.
+ */
+public final class ProxyCombiner {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ProxyCombiner.class);
+
+  private ProxyCombiner() { }
+
+  /**
+   * Combine two or more proxies which together comprise a single proxy
+   * interface. This can be used for a protocol interface which {@code extends}
+   * multiple other protocol interfaces. The returned proxy will implement
+   * all of the methods of the combined proxy interface, delegating calls
+ * to whichever proxy implements that method. If multiple proxies implement the
+   * same method, the first in the list will be used for delegation.
+   *
+   * This will check that every method on the combined interface is
+   * implemented by at least one of the supplied proxy objects.
+   *
+   * @param combinedProxyInterface The interface of the combined proxy.
+   * @param proxies The proxies which should be used as delegates.
+   * @param <T> The type of the proxy that will be returned.
+   * @return The combined proxy.
+   */
+  @SuppressWarnings("unchecked")
+  public static <T> T combine(Class<T> combinedProxyInterface,
+      Object... proxies) {
+    methodLoop:
+    for (Method m : combinedProxyInterface.getMethods()) {
+      for (Object proxy : proxies) {
+        try {
+          proxy.getClass().getMethod(m.getName(), m.getParameterTypes());
+          continue methodLoop; // go to the next method
+        } catch (NoSuchMethodException nsme) {
+          // Continue to try the next proxy
+        }
+      }
+      throw new IllegalStateException("The proxies specified for "
+          + combinedProxyInterface + " do not cover method " + m);
+    }
+
+
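
As a hedged illustration of the API described in the javadoc above, a caller
might combine two protocol proxies roughly like this (BalancerProtocols comes
from this commit's file list; the proxy variables are assumed to already exist):

    // Sketch only: assumes namenodeProtocolProxy and clientProtocolProxy were
    // created separately, and that BalancerProtocols extends both
    // NamenodeProtocol and ClientProtocol.
    BalancerProtocols combined = ProxyCombiner.combine(BalancerProtocols.class,
        namenodeProtocolProxy, clientProtocolProxy);

    combined.getBlocks(datanode, size);  // delegated to the NamenodeProtocol proxy
    combined.getServerDefaults();        // delegated to the ClientProtocol proxy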

[hadoop] branch branch-2 updated: HDFS-14245. [SBN read] Enable ObserverReadProxyProvider to work with non-ClientProtocol proxy types. Contributed by Erik Krogen.

2019-10-10 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 200c52f  HDFS-14245. [SBN read] Enable ObserverReadProxyProvider to 
work with non-ClientProtocol proxy types. Contributed by Erik Krogen.
200c52f is described below

commit 200c52f78b3237e6aeb3bca66a3ac0afa00e03db
Author: Erik Krogen 
AuthorDate: Wed Apr 17 14:38:24 2019 -0700

HDFS-14245. [SBN read] Enable ObserverReadProxyProvider to work with 
non-ClientProtocol proxy types. Contributed by Erik Krogen.

(cherry picked from 5847e0014343f60f853cb796781ca1fa03a72efd)
(cherry picked from 6630c9b75d65deefb5550e355eef7783909a57bc)
(cherry picked from 9fdb849e034573bb44abd593eefa1e13a3261376)
---
 .../ha/AbstractNNFailoverProxyProvider.java|  3 +-
 .../namenode/ha/ObserverReadProxyProvider.java | 54 --
 .../namenode/ha/TestDelegationTokensWithHA.java|  2 +-
 .../hdfs/server/namenode/ha/TestObserverNode.java  | 12 +
 .../namenode/ha/TestObserverReadProxyProvider.java | 29 
 5 files changed, 83 insertions(+), 17 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
index 572cb1c..646b100 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/AbstractNNFailoverProxyProvider.java
@@ -115,7 +115,8 @@ public abstract class AbstractNNFailoverProxyProvider<T> implements
 /**
  * The currently known state of the NameNode represented by this ProxyInfo.
  * This may be out of date if the NameNode has changed state since the last
- * time the state was checked.
+ * time the state was checked. If the NameNode could not be contacted, this
+ * will store null to indicate an unknown state.
  */
 private HAServiceState cachedState;
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
index 0d9d3e7..c30623b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java
@@ -66,7 +66,7 @@ import com.google.common.annotations.VisibleForTesting;
  */
 @InterfaceAudience.Private
 @InterfaceStability.Evolving
-public class ObserverReadProxyProvider<T extends ClientProtocol>
+public class ObserverReadProxyProvider<T>
     extends AbstractNNFailoverProxyProvider<T> {
   @VisibleForTesting
   static final Logger LOG = LoggerFactory.getLogger(
@@ -189,7 +189,13 @@ public class ObserverReadProxyProvider
 AUTO_MSYNC_PERIOD_DEFAULT, TimeUnit.MILLISECONDS);
 
 // TODO : make this configurable or remove this variable
-    this.observerReadEnabled = true;
+    if (wrappedProxy instanceof ClientProtocol) {
+      this.observerReadEnabled = true;
+    } else {
+      LOG.info("Disabling observer reads for {} because the requested proxy "
+          + "class does not implement {}", uri, ClientProtocol.class.getName());
+      this.observerReadEnabled = false;
+    }
   }
 
   public AlignmentContext getAlignmentContext() {
@@ -267,7 +273,7 @@ public class ObserverReadProxyProvider
  private HAServiceState getHAServiceState(NNProxyInfo<T> proxyInfo) {
 IOException ioe;
 try {
-  return proxyInfo.proxy.getHAServiceState();
+  return getProxyAsClientProtocol(proxyInfo.proxy).getHAServiceState();
 } catch (RemoteException re) {
   // Though a Standby will allow a getHAServiceState call, it won't allow
   // delegation token lookup, so if DT is used it throws StandbyException
@@ -284,7 +290,19 @@ public class ObserverReadProxyProvider
   LOG.debug("Failed to connect to {} while fetching HAServiceState",
   proxyInfo.getAddress(), ioe);
 }
-return HAServiceState.STANDBY;
+return null;
+  }
+
+  /**
+   * Return the input proxy, cast as a {@link ClientProtocol}. This catches any
+   * {@link ClassCastException} and wraps it in a more helpful message. This
+   * should ONLY be called if the caller is certain that the proxy is, in fact,
+   * a {@link ClientProtocol}.
+   */
+  private ClientProtocol getProxyAsClientProtocol(T proxy) {
+assert proxy instanceof ClientProtocol : "BUG: Attempted to use proxy "
++ 
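
The archiver clips the helper's body here; judging from its javadoc, a plausible
completion is an asserted cast (a sketch, not the verbatim commit):

    private ClientProtocol getProxyAsClientProtocol(T proxy) {
      assert proxy instanceof ClientProtocol : "BUG: Attempted to use proxy "
          + proxy.getClass() + " as if it were a ClientProtocol.";
      return (ClientProtocol) proxy;
    }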

[hadoop] branch trunk updated: HADOOP-16650. ITestS3AClosedFS failing.

2019-10-10 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new effe608  HADOOP-16650. ITestS3AClosedFS failing.
effe608 is described below

commit effe6087a5763e087460148965004238c159d287
Author: Steve Loughran 
AuthorDate: Thu Oct 10 17:32:12 2019 +0100

HADOOP-16650. ITestS3AClosedFS failing.

Contributed by Steve Loughran.

Change-Id: Ia9bb84bd6455e210a54cfe9eb944feeda8b58da9
---
 .../src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java  | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
index 592c4be..c8e0d36 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
@@ -1396,13 +1396,17 @@ public final class S3ATestUtils {
   }
 
   /**
-   * Get a set containing the names of all active threads.
+   * Get a set containing the names of all active threads,
+   * stripping out all test runner threads.
* @return the current set of threads.
*/
   public static Set<String> getCurrentThreadNames() {
-    return Thread.getAllStackTraces().keySet()
+    TreeSet<String> threads = Thread.getAllStackTraces().keySet()
         .stream()
         .map(Thread::getName)
+        .filter(n -> !n.startsWith("JUnit"))
+        .filter(n -> !n.startsWith("surefire"))
         .collect(Collectors.toCollection(TreeSet::new));
+    return threads;
   }
 }
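
For context, a hedged sketch of how such a helper is typically used in a
thread-leak assertion (the real assertions live in ITestS3AClosedFS; the
variable names here are assumed):

    // Snapshot thread names, run the operation under test, then verify that
    // no new non-test-runner threads survive it.
    Set<String> before = S3ATestUtils.getCurrentThreadNames();
    fs.close();
    Set<String> after = S3ATestUtils.getCurrentThreadNames();
    after.removeAll(before);
    Assert.assertTrue("Leaked threads: " + after, after.isEmpty());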





[hadoop] branch trunk updated: YARN-9860. Enable service mode for Docker containers on YARN Contributed by Prabhu Joseph and Shane Kumpf

2019-10-10 Thread eyang
This is an automated email from the ASF dual-hosted git repository.

eyang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 31e0122  YARN-9860. Enable service mode for Docker containers on YARN  
  Contributed by Prabhu Joseph and Shane Kumpf
31e0122 is described below

commit 31e0122f4d4ddc4026470b45d2bf683ece137d44
Author: Eric Yang 
AuthorDate: Thu Oct 10 19:02:02 2019 -0400

YARN-9860. Enable service mode for Docker containers on YARN
   Contributed by Prabhu Joseph and Shane Kumpf
---
 .../yarn/service/api/records/ConfigFile.java   |  28 -
 .../hadoop/yarn/service/client/ServiceClient.java  |  20 +++-
 .../yarn/service/conf/YarnServiceConstants.java|   2 +
 .../yarn/service/provider/ProviderUtils.java   |  41 ++-
 .../provider/tarball/TarballProviderService.java   |   4 +-
 .../hadoop/yarn/service/utils/CoreFileSystem.java  |  17 ++-
 .../yarn/service/utils/SliderFileSystem.java   |  34 ++
 .../yarn/service/provider/TestProviderUtils.java   | 119 +++--
 .../linux/runtime/DockerLinuxContainerRuntime.java |  39 +--
 .../linux/runtime/docker/DockerRunCommand.java |   6 ++
 .../container-executor/impl/container-executor.h   |   6 --
 .../container-executor/impl/utils/docker-util.c|  59 +++---
 .../container-executor/impl/utils/docker-util.h|   4 +-
 .../src/site/markdown/DockerContainers.md  |  23 
 14 files changed, 305 insertions(+), 97 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ConfigFile.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ConfigFile.java
index c09373f..060e204 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ConfigFile.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ConfigFile.java
@@ -24,6 +24,7 @@ import io.swagger.annotations.ApiModel;
 import io.swagger.annotations.ApiModelProperty;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
 
 import javax.xml.bind.annotation.XmlElement;
 import javax.xml.bind.annotation.XmlEnum;
@@ -73,6 +74,7 @@ public class ConfigFile implements Serializable {
   private TypeEnum type = null;
   private String destFile = null;
   private String srcFile = null;
+  private LocalResourceVisibility visibility = null;
   private Map<String, String> properties = new HashMap<>();
 
   public ConfigFile copy() {
@@ -80,6 +82,7 @@ public class ConfigFile implements Serializable {
 copy.setType(this.getType());
 copy.setSrcFile(this.getSrcFile());
 copy.setDestFile(this.getDestFile());
+copy.setVisibility(this.visibility);
 if (this.getProperties() != null && !this.getProperties().isEmpty()) {
   copy.getProperties().putAll(this.getProperties());
 }
@@ -150,6 +153,26 @@ public class ConfigFile implements Serializable {
 this.srcFile = srcFile;
   }
 
+
+  /**
+   * Visibility of the Config file.
+   **/
+  public ConfigFile visibility(LocalResourceVisibility localrsrcVisibility) {
+this.visibility = localrsrcVisibility;
+return this;
+  }
+
+  @ApiModelProperty(example = "null", value = "Visibility of the Config file")
+  @JsonProperty("visibility")
+  public LocalResourceVisibility getVisibility() {
+return visibility;
+  }
+
+  @XmlElement(name = "visibility", defaultValue="APPLICATION")
+  public void setVisibility(LocalResourceVisibility localrsrcVisibility) {
+this.visibility = localrsrcVisibility;
+  }
+
   /**
A blob of key value pairs that will be dumped in the dest_file in the format
as specified in type. If src_file is specified, src_file content are dumped
@@ -200,12 +223,13 @@ public class ConfigFile implements Serializable {
 return Objects.equals(this.type, configFile.type)
 && Objects.equals(this.destFile, configFile.destFile)
 && Objects.equals(this.srcFile, configFile.srcFile)
+&& Objects.equals(this.visibility, configFile.visibility)
 && Objects.equals(this.properties, configFile.properties);
   }
 
   @Override
   public int hashCode() {
-return Objects.hash(type, destFile, srcFile, properties);
+return Objects.hash(type, destFile, srcFile, visibility, properties);
   }
 
   @Override
@@ -217,6 +241,8 @@ public class ConfigFile implements Serializable {
 .append("destFile: 
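
A hedged sketch of what the new field enables on the service-spec side, using
the fluent setters this class follows (the TypeEnum constant and paths below
are illustrative assumptions):

    ConfigFile file = new ConfigFile()
        .type(ConfigFile.TypeEnum.TEMPLATE)
        .srcFile("hdfs:///app/conf/my-site.xml.template")
        .destFile("/etc/app/my-site.xml")
        .visibility(LocalResourceVisibility.PUBLIC);  // added by this change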

[hadoop] branch trunk updated (31e0122 -> 9c72bf4)

2019-10-10 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 31e0122  YARN-9860. Enable service mode for Docker containers on YARN  
  Contributed by Prabhu Joseph and Shane Kumpf
 add 9c72bf4  HDDS-1986. Fix listkeys API. (#1588)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/ozone/om/OmMetadataManagerImpl.java |  89 +++-
 .../hadoop/ozone/om/TestOmMetadataManager.java | 226 +
 .../ozone/om/request/TestOMRequestUtils.java   |  60 --
 3 files changed, 352 insertions(+), 23 deletions(-)





[hadoop] branch trunk updated (4850b3a -> 957253f)

2019-10-10 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 4850b3a  HDDS-2269. Provide config for fair/non-fair for OM RW Lock. 
(#1623)
 add 957253f  HDDS-1984. Fix listBucket API. (#1555)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hdds/utils/db/cache/CacheKey.java   |  11 +-
 .../hadoop/hdds/utils/db/cache/TableCacheImpl.java |  12 +-
 .../hadoop/ozone/om/OmMetadataManagerImpl.java |  36 ++--
 .../hadoop/ozone/om/TestOmMetadataManager.java | 191 +
 4 files changed, 233 insertions(+), 17 deletions(-)
 create mode 100644 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java





[hadoop] branch trunk updated (effe608 -> 4850b3a)

2019-10-10 Thread nanda
This is an automated email from the ASF dual-hosted git repository.

nanda pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from effe608  HADOOP-16650. ITestS3AClosedFS failing.
 add 4850b3a  HDDS-2269. Provide config for fair/non-fair for OM RW Lock. 
(#1623)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/ozone/OzoneConfigKeys.java |  3 +++
 .../java/org/apache/hadoop/ozone/lock/ActiveLock.java | 11 +++
 .../org/apache/hadoop/ozone/lock/LockManager.java | 19 ---
 .../apache/hadoop/ozone/lock/PooledLockFactory.java   |  7 ++-
 .../common/src/main/resources/ozone-default.xml   | 11 +++
 .../apache/hadoop/ozone/om/lock/OzoneManagerLock.java |  7 ++-
 6 files changed, 49 insertions(+), 9 deletions(-)
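
For background, the fair/non-fair choice being made configurable here maps onto
the fairness flag of the JDK lock primitive (a generic sketch, not the Ozone
LockManager API itself):

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class LockFairnessSketch {
      // fair = true hands the lock to the longest-waiting thread (predictable
      // latency); fair = false permits barging, which usually gives higher
      // throughput under contention. A config key like the one added here
      // would feed this flag.
      ReentrantReadWriteLock newLock(boolean fair) {
        return new ReentrantReadWriteLock(fair);
      }
    }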





[hadoop] branch branch-3.1 updated: HDFS-14373. EC : Decoding is failing when block group last incomplete cell fall in to AlignedStripe. Contributed by Surendra Singh Lilhore.

2019-10-10 Thread surendralilhore
This is an automated email from the ASF dual-hosted git repository.

surendralilhore pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 626a48d   HDFS-14373. EC : Decoding is failing when block group last 
incomplete cell fall in to AlignedStripe. Contributed by Surendra Singh Lilhore.
626a48d is described below

commit 626a48d47239a4719cd6851cd279e6308490a16d
Author: Surendra Singh Lilhore 
AuthorDate: Fri Oct 11 00:09:20 2019 +0530

 HDFS-14373. EC : Decoding is failing when block group last incomplete cell 
fall in to AlignedStripe. Contributed by Surendra Singh Lilhore.
---
 .../java/org/apache/hadoop/hdfs/StripeReader.java  |  4 ++
 .../apache/hadoop/hdfs/util/StripedBlockUtil.java  | 20 +++--
 .../hadoop/hdfs/TestDFSStripedInputStream.java | 47 ++
 3 files changed, 68 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java
index 168b48c..e840da9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java
@@ -247,6 +247,8 @@ abstract class StripeReader {
   DFSClient.LOG.warn("Found Checksum error for "
   + currentBlock + " from " + currentNode
   + " at " + ce.getPos());
+      //Clear buffer to make the next decode succeed
+      strategy.getReadBuffer().clear();
   // we want to remember which block replicas we have tried
   corruptedBlocks.addCorruptedBlock(currentBlock, currentNode);
   throw ce;
@@ -254,6 +256,8 @@ abstract class StripeReader {
   DFSClient.LOG.warn("Exception while reading from "
   + currentBlock + " of " + dfsStripedInputStream.getSrc() + " from "
   + currentNode, e);
+      //Clear buffer to make the next decode succeed
+      strategy.getReadBuffer().clear();
   throw e;
 }
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
index 4c2ff92..2b09a7f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
@@ -355,7 +355,8 @@ public class StripedBlockUtil {
 cells);
 
 // Step 3: merge into stripes
-AlignedStripe[] stripes = mergeRangesForInternalBlocks(ecPolicy, ranges);
+AlignedStripe[] stripes = mergeRangesForInternalBlocks(ecPolicy, ranges,
+blockGroup, cellSize);
 
 // Step 4: calculate each chunk's position in destination buffer. Since the
 // whole read range is within a single stripe, the logic is simpler here.
@@ -416,7 +417,8 @@ public class StripedBlockUtil {
 cells);
 
 // Step 3: merge into at most 5 stripes
-AlignedStripe[] stripes = mergeRangesForInternalBlocks(ecPolicy, ranges);
+AlignedStripe[] stripes = mergeRangesForInternalBlocks(ecPolicy, ranges,
+blockGroup, cellSize);
 
 // Step 4: calculate each chunk's position in destination buffer
 calcualteChunkPositionsInBuf(cellSize, stripes, cells, buf);
@@ -512,7 +514,8 @@ public class StripedBlockUtil {
* {@link AlignedStripe} instances.
*/
   private static AlignedStripe[] mergeRangesForInternalBlocks(
-  ErasureCodingPolicy ecPolicy, VerticalRange[] ranges) {
+  ErasureCodingPolicy ecPolicy, VerticalRange[] ranges,
+  LocatedStripedBlock blockGroup, int cellSize) {
 int dataBlkNum = ecPolicy.getNumDataUnits();
 int parityBlkNum = ecPolicy.getNumParityUnits();
 List<AlignedStripe> stripes = new ArrayList<>();
@@ -524,6 +527,17 @@ public class StripedBlockUtil {
   }
 }
 
+    // Add the block group's last cell end offset to stripePoints if it falls
+    // into the read offset range.
+    int lastCellIdxInBG = (int) (blockGroup.getBlockSize() / cellSize);
+    int idxInInternalBlk = lastCellIdxInBG / ecPolicy.getNumDataUnits();
+    long lastCellEndOffset = (idxInInternalBlk * (long)cellSize)
+        + (blockGroup.getBlockSize() % cellSize);
+    if (stripePoints.first() < lastCellEndOffset
+        && stripePoints.last() > lastCellEndOffset) {
+      stripePoints.add(lastCellEndOffset);
+    }
+
 long prev = -1;
 for (long point : stripePoints) {
   if (prev >= 0) {
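
A worked example of the new stripe point, under assumed parameters (RS 6+3 with
1 MiB cells and a 6.5 MiB block group; none of these values come from the patch):

    int cellSize = 1024 * 1024;                          // 1 MiB
    long blockGroupSize = 6L * cellSize + cellSize / 2;  // 6.5 MiB

    int lastCellIdxInBG = (int) (blockGroupSize / cellSize);      // 6
    int idxInInternalBlk = lastCellIdxInBG / 6 /* data units */;  // 1
    long lastCellEndOffset = (idxInInternalBlk * (long) cellSize)
        + (blockGroupSize % cellSize);                   // 1572864 = 1.5 MiB

    // Any read whose stripe points straddle 1.5 MiB on an internal block now
    // gets an extra split there, so a decode never mixes the partial last
    // cell with the bytes beyond it.
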
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
index 48ecf9a..d50d482 100644
--- 

[hadoop] branch trunk updated: HADOOP-15870. S3AInputStream.remainingInFile should use nextReadPos.

2019-10-10 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7a4b3d4  HADOOP-15870. S3AInputStream.remainingInFile should use 
nextReadPos.
7a4b3d4 is described below

commit 7a4b3d42c4e36e468c2a46fd48036a6fed547853
Author: lqjacklee 
AuthorDate: Thu Oct 10 21:58:42 2019 +0100

HADOOP-15870. S3AInputStream.remainingInFile should use nextReadPos.

Contributed by lqjacklee.

Change-Id: I32bb00a683102e7ff8ff8ce0b8d9c3195ca7381c
---
 .../site/markdown/filesystem/fsdatainputstream.md  | 53 +++
 .../fs/contract/AbstractContractSeekTest.java  | 75 --
 .../org/apache/hadoop/fs/s3a/S3AInputStream.java   | 29 ++---
 .../fs/contract/s3a/ITestS3AContractSeek.java  |  2 +-
 4 files changed, 143 insertions(+), 16 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
index 0906964..b8f9e87 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
@@ -119,6 +119,59 @@ Return the data at the current position.
 else
 result = -1
 
+###  `InputStream.available()`
+
+Returns the number of bytes "estimated" to be readable on a stream before 
`read()`
+blocks on any IO (i.e. the thread is potentially suspended for some time).
+
+That is: for all values `v` returned by `available()`, `read(buffer, 0, v)`
+should not block.
+
+#### Postconditions
+
+```python
+if len(data) == 0:
+  result = 0
+
+elif pos >= len(data):
+  result = 0
+
+else:
+  d = "the amount of data known to be already buffered/cached locally"
+  result = min(1, d)  # optional but recommended: see below.
+```
+
+As `0` is a number which always meets this condition, it is nominally
+possible for an implementation to simply return `0`. However, this is not
+considered useful, and some applications/libraries expect a positive number.
+
+#### The GZip problem.
+
+[JDK-7036144](http://bugs.java.com/bugdatabase/view_bug.do?bug_id=7036144),
+"GZIPInputStream readTrailer uses faulty available() test for end-of-stream"
+discusses how the JDK's GZip code it uses `available()` to detect an EOF,
+in a loop similar to the the following
+
+```java
+while (instream.available() > 0) {
+  process(instream.read());
+}
+```
+
+The correct loop would have been:
+
+```java
+int r;
+while((r=instream.read()) >= 0) {
+  process(r);
+}
+```
+
+If `available()` ever returns 0, then the gzip loop halts prematurely.
+
+For this reason, implementations *should* return a value >= 1, even
+if it breaks that requirement of `available()` returning the amount guaranteed
+not to block on reads.
 
 ###  `InputStream.read(buffer[], 
offset, length)`
 
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java
index ca8e4a0..db36916 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java
@@ -32,6 +32,7 @@ import java.io.EOFException;
 import java.io.IOException;
 import java.util.Random;
 
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.createFile;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.skip;
@@ -99,14 +100,18 @@ public abstract class AbstractContractSeekTest extends 
AbstractFSContractTestBas
 describe("seek and read a 0 byte file");
 instream = getFileSystem().open(zeroByteFile);
 assertEquals(0, instream.getPos());
+assertAvailableIsZero(instream);
 //expect initial read to fail
 int result = instream.read();
 assertMinusOne("initial byte read", result);
+assertAvailableIsZero(instream);
 byte[] buffer = new byte[1];
 //expect that seek to 0 works
 instream.seek(0);
+assertAvailableIsZero(instream);
 //reread, expect same exception
 result = instream.read();
+assertAvailableIsZero(instream);
 assertMinusOne("post-seek byte read", result);
 result = instream.read(buffer, 0, 1);
 assertMinusOne("post-seek buffer read", result);
@@ -132,8 +137,8 @@ public abstract class AbstractContractSeekTest extends 
AbstractFSContractTestBas
   @Test
   public void testSeekReadClosedFile() throws Throwable {
 instream = 
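
Read together with the fsdatainputstream.md contract above, a hedged sketch of
an available() override that satisfies it (field and helper names are
assumptions, not the actual S3AInputStream members):

    @Override
    public synchronized int available() throws IOException {
      checkNotClosed();                             // assumed helper
      long remaining = contentLength - nextReadPos;
      if (remaining <= 0) {
        return 0;                                   // only at (or past) EOF
      }
      // Report at least 1 so GZIP-style while(available()) loops make
      // progress even before anything is buffered locally.
      return (int) Math.min(Integer.MAX_VALUE, remaining);
    }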

[hadoop] branch branch-2 updated: HDFS-14509. DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x. Contributed by Yuxuan Wang and Konstantin Shvachko.

2019-10-10 Thread cliang
This is an automated email from the ASF dual-hosted git repository.

cliang pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 6566402  HDFS-14509. DN throws InvalidToken due to inequality of 
password when upgrade NN 2.x to 3.x. Contributed by Yuxuan Wang and Konstantin 
Shvachko.
6566402 is described below

commit 6566402a1b22d2fa8311db7c7583f7200a3de88d
Author: Chen Liang 
AuthorDate: Thu Oct 10 13:29:30 2019 -0700

HDFS-14509. DN throws InvalidToken due to inequality of password when 
upgrade NN 2.x to 3.x. Contributed by Yuxuan Wang and Konstantin Shvachko.
---
 .../security/token/block/BlockTokenIdentifier.java | 13 ++
 .../hdfs/security/token/block/TestBlockToken.java  | 50 ++
 2 files changed, 63 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenIdentifier.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenIdentifier.java
index 87c831a..4d2037b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenIdentifier.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenIdentifier.java
@@ -19,12 +19,14 @@
 package org.apache.hadoop.hdfs.security.token.block;
 
 import java.io.DataInput;
+import java.io.DataInputStream;
 import java.io.DataOutput;
 import java.io.EOFException;
 import java.io.IOException;
 import java.util.EnumSet;
 
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.WritableUtils;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -116,6 +118,7 @@ public class BlockTokenIdentifier extends TokenIdentifier {
   }
 
   public void setHandshakeMsg(byte[] bytes) {
+cache = null; // invalidate the cache
 handshakeMsg = bytes;
   }
 
@@ -159,6 +162,16 @@ public class BlockTokenIdentifier extends TokenIdentifier {
   @Override
   public void readFields(DataInput in) throws IOException {
 this.cache = null;
+    if (in instanceof DataInputStream) {
+      final DataInputStream dis = (DataInputStream) in;
+      // this.cache should be assigned the raw bytes from the input data for
+      // upgrading compatibility. If we won't mutate fields and call getBytes()
+      // for something (e.g. retrieve password), we should return the raw bytes
+      // instead of serializing the instance's own fields to bytes, because we
+      // may lose newly added fields which we can't recognize.
+      this.cache = IOUtils.readFullyToByteArray(dis);
+      dis.reset();
+    }
 expiryDate = WritableUtils.readVLong(in);
 keyId = WritableUtils.readVInt(in);
 userId = WritableUtils.readString(in);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
index 7d0c90f..07aaf09 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
@@ -19,6 +19,7 @@
 package org.apache.hadoop.hdfs.security.token.block;
 
 import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
@@ -30,6 +31,7 @@ import java.io.ByteArrayInputStream;
 import java.io.DataInputStream;
 import java.io.File;
 import java.io.IOException;
+import java.io.DataOutput;
 import java.net.InetSocketAddress;
 import java.util.EnumSet;
 import java.util.Set;
@@ -76,6 +78,7 @@ import org.junit.Assert;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
+import org.mockito.Mockito;
 import org.mockito.invocation.InvocationOnMock;
 import org.mockito.stubbing.Answer;
 
@@ -435,4 +438,51 @@ public class TestBlockToken {
   }
 }
   }
+
+  @Test
+  public void testRetrievePasswordWithUnknownFields() throws IOException {
+    BlockTokenIdentifier id = new BlockTokenIdentifier();
+    BlockTokenIdentifier spyId = Mockito.spy(id);
+    Mockito.doAnswer(new Answer<Void>() {
+      @Override
+      public Void answer(InvocationOnMock invocation) throws Throwable {
+        DataOutput out = (DataOutput) invocation.getArguments()[0];
+        invocation.callRealMethod();
+        // write something at the end that BlockTokenIdentifier#readFields()
+        // will ignore, but which is still a part of the 
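
(The quoted test is clipped by the archiver.) The essence of the compatibility
trick, as a hedged sketch: serialize an identifier, append bytes the reader will
not recognize, then check that getBytes() still returns the raw input, so the
password MAC computed over it stays stable:

    // Sketch, assuming org.apache.hadoop.io.DataOutputBuffer / DataInputBuffer.
    BlockTokenIdentifier written = new BlockTokenIdentifier();
    DataOutputBuffer out = new DataOutputBuffer();
    written.write(out);
    out.writeByte(0x42);  // a field from a "newer" writer, unknown here

    DataInputBuffer in = new DataInputBuffer();
    in.reset(out.getData(), out.getLength());
    BlockTokenIdentifier read = new BlockTokenIdentifier();
    read.readFields(in);  // caches the raw bytes, unknown trailer included

    // The password is a MAC over getBytes(), so the raw form must round-trip.
    assertArrayEquals(Arrays.copyOf(out.getData(), out.getLength()),
        read.getBytes());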

[hadoop] branch trunk updated (9c72bf4 -> f267917)

2019-10-10 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 9c72bf4  HDDS-1986. Fix listkeys API. (#1588)
 add f267917  HDDS-2282. scmcli pipeline list command throws 
NullPointerException. Contributed by Xiaoyu Yao. (#1642)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hdds/scm/XceiverClientManager.java   |  4 +++-
 .../java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java | 21 +++--
 .../dist/src/main/compose/ozonesecure/test.sh   |  2 ++
 .../{kinit.robot => scmcli/pipeline.robot}  | 14 +++---
 4 files changed, 31 insertions(+), 10 deletions(-)
 copy hadoop-ozone/dist/src/main/smoketest/{kinit.robot => 
scmcli/pipeline.robot} (74%)

