[GitHub] [hadoop] bshashikant commented on a change in pull request #1313: HDFS-13118. SnapshotDiffReport should provide the INode type.

2019-08-21 Thread GitBox
bshashikant commented on a change in pull request #1313: HDFS-13118. 
SnapshotDiffReport should provide the INode type.
URL: https://github.com/apache/hadoop/pull/1313#discussion_r316506315
 
 

 ##
 File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffListingInfo.java
 ##
 @@ -156,6 +163,17 @@ boolean addDirDiff(long dirId, byte[][] parent, ChildrenDiff diff) {
     return fullPath;
   }
 
+  private static DiffReportListingEntry.INodeType fromINode(INode inode) {
 
 Review comment:
   can we also rename the routine to getInodeType()?
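   A sketch of what I mean; the body below is a guess at the mapping from the INode API, not the actual patch:
   ```java
   // Hedged sketch only: illustrates the suggested name, not the real code.
   private static DiffReportListingEntry.INodeType getInodeType(INode inode) {
     if (inode.isDirectory()) {
       return DiffReportListingEntry.INodeType.DIRECTORY;
     } else if (inode.isSymlink()) {
       return DiffReportListingEntry.INodeType.SYMLINK;
     }
     return DiffReportListingEntry.INodeType.FILE;
   }
   ```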





[GitHub] [hadoop] lokeshj1703 commented on issue #1278: HDDS-1950. S3 MPU part-list call fails if there are no parts

2019-08-21 Thread GitBox
lokeshj1703 commented on issue #1278: HDDS-1950. S3 MPU part-list call fails if 
there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-523759252
 
 
   @elek Thanks for updating the PR! The changes look good to me. There is an 
acceptance test failure which I am not able to check.





[GitHub] [hadoop] bshashikant commented on a change in pull request #1313: HDFS-13118. SnapshotDiffReport should provide the INode type.

2019-08-21 Thread GitBox
bshashikant commented on a change in pull request #1313: HDFS-13118. 
SnapshotDiffReport should provide the INode type.
URL: https://github.com/apache/hadoop/pull/1313#discussion_r316505447
 
 

 ##
 File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java
 ##
 @@ -75,36 +75,55 @@ public static DiffType parseDiffType(String s){
     }
   }
 
+  /**
+   * INodeType specifies the type of INode: FILE, DIRECTORY, or SYMLINK.
+   */
+  public enum INodeType {
+    FILE,
+    DIRECTORY,
+    SYMLINK;
+
+    public static INodeType parseINodeType(String s) {
+      return INodeType.valueOf(s.toUpperCase());
+    }
+  }
+
   /**
    * Representing the full path and diff type of a file/directory where changes
    * have happened.
    */
   public static class DiffReportEntry {
     /** The type of the difference. */
     private final DiffType type;
 Review comment:
   Let's be consistent with the naming of this variable: rename "iNodeType" to 
"inodeType".





[GitHub] [hadoop] lokeshj1703 commented on a change in pull request #1278: HDDS-1950. S3 MPU part-list call fails if there are no parts

2019-08-21 Thread GitBox
lokeshj1703 commented on a change in pull request #1278: HDDS-1950. S3 MPU 
part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#discussion_r316503317
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
 ##
 @@ -0,0 +1,93 @@
+package org.apache.hadoop.ozone.om;
+
+import java.io.IOException;
+import java.util.ArrayList;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
+import org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs.Builder;
+import org.apache.hadoop.ozone.om.helpers.OmMultipartInfo;
+import org.apache.hadoop.ozone.om.helpers.OmMultipartUploadListParts;
+import org.apache.hadoop.ozone.security.OzoneBlockTokenSecretManager;
+import org.apache.hadoop.test.GenericTestUtils;
+
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+/**
+ * Unit test key manager.
+ */
+public class TestKeyManagerUnit {
 
 Review comment:
   Yes, that makes sense. We can later move into this test some unit tests that 
do not require SCM and other components.





[GitHub] [hadoop] bshashikant commented on issue #1319: HDDS-1981: Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state

2019-08-21 Thread GitBox
bshashikant commented on issue #1319: HDDS-1981: Datanode should sync db when 
container is moved to CLOSED or QUASI_CLOSED state
URL: https://github.com/apache/hadoop/pull/1319#issuecomment-523756239
 
 
   Thanks @lokeshj1703 for working on this. I think running an explicit 
compaction before close might make it a heavy operation. Can we run the 
compaction in the background or in a separate thread after closing the container?
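   A rough sketch of what I mean; the executor wiring and the compactDb() helper are illustrative names, not part of this PR:
   ```java
   // Hypothetical sketch: keep the state transition cheap and defer the
   // heavy RocksDB compaction to a background thread.
   private static final ExecutorService COMPACTION_EXECUTOR =
       Executors.newSingleThreadExecutor();

   void closeContainer(Container container) throws IOException {
     container.close();  // move to CLOSED/QUASI_CLOSED on the fast path
     COMPACTION_EXECUTOR.submit(() -> {
       try {
         compactDb(container);  // illustrative helper; runs off the close path
       } catch (IOException e) {
         LOG.warn("Background compaction failed for container {}",
             container.getContainerData().getContainerID(), e);
       }
     });
   }
   ```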





[jira] [Commented] (HADOOP-16061) Update Apache Yetus to 0.10.0

2019-08-21 Thread Hudson (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912989#comment-16912989 ]

Hudson commented on HADOOP-16061:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17167 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17167/])
HADOOP-16061. Upgrade Yetus to 0.10.0 (aajisaka: rev 
5e156b9ddec46d6b7d1336bb88136d8826972e7a)
* (edit) dev-support/bin/yetus-wrapper


> Update Apache Yetus to 0.10.0
> -
>
> Key: HADOOP-16061
> URL: https://issues.apache.org/jira/browse/HADOOP-16061
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
>
> Yetus 0.10.0 is out. Let's upgrade.






[GitHub] [hadoop] bshashikant commented on issue #1328: HDDS-1998. TestSecureContainerServer#testClientServerRatisGrpc is fai…

2019-08-21 Thread GitBox
bshashikant commented on issue #1328: HDDS-1998. 
TestSecureContainerServer#testClientServerRatisGrpc is fai…
URL: https://github.com/apache/hadoop/pull/1328#issuecomment-523755123
 
 
   Thanks @pingsutw for working on this. While asserting that the exception is 
an IOException, the test also needs to validate that the underlying exception is 
a StorageContainerException with code BLOCK_TOKEN_VERIFICATION_FAILURE. Can you 
please add this as well?
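   A sketch of the extra validation I have in mind, where `client` and `request` stand in for the test's actual failing call, and the result code is assumed to live on ContainerProtos.Result:
   ```java
   try {
     client.sendCommand(request);  // placeholder for the call expected to fail
     Assert.fail("Expected block token verification to fail");
   } catch (IOException e) {
     Throwable cause = e.getCause();
     Assert.assertTrue("cause should be a StorageContainerException",
         cause instanceof StorageContainerException);
     Assert.assertEquals(ContainerProtos.Result.BLOCK_TOKEN_VERIFICATION_FAILURE,
         ((StorageContainerException) cause).getResult());
   }
   ```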





[GitHub] [hadoop] tasanuma commented on issue #1320: HDFS-14755. [Dynamometer] Hadoop-2 DataNode fail to start

2019-08-21 Thread GitBox
tasanuma commented on issue #1320: HDFS-14755. [Dynamometer] Hadoop-2 DataNode 
fail to start
URL: https://github.com/apache/hadoop/pull/1320#issuecomment-523754821
 
 
   Thanks for your explanation, @xkrogen.
   
   I see. I'm not sure why my job worked, but there should be one directory 
there. So if we have multiple storage locations, we should use the first of 
them here, right?
   
   Updated the PR. It handles that case while keeping compatibility between 
hadoop-2 and hadoop-3.





[GitHub] [hadoop] avijayanhwx commented on issue #1333: HDDS-2008 : Wrong package for RatisHelper class in hadoop-hdds/common…

2019-08-21 Thread GitBox
avijayanhwx commented on issue #1333: HDDS-2008 : Wrong package for RatisHelper 
class in hadoop-hdds/common…
URL: https://github.com/apache/hadoop/pull/1333#issuecomment-523753156
 
 
   /label ozone





[GitHub] [hadoop] avijayanhwx opened a new pull request #1333: HDDS-2008 : Wrong package for RatisHelper class in hadoop-hdds/common…

2019-08-21 Thread GitBox
avijayanhwx opened a new pull request #1333: HDDS-2008 : Wrong package for 
RatisHelper class in hadoop-hdds/common…
URL: https://github.com/apache/hadoop/pull/1333
 
 
   … module.
   
   It is currently org.apache.ratis.RatisHelper.
   
   It should be org.apache.hadoop.hdds.ratis.RatisHelper.
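   A sketch of the one-line fix this implies:
   ```java
   // Before (the class lives in hadoop-hdds/common but declares a Ratis package):
   // package org.apache.ratis;
   // After:
   package org.apache.hadoop.hdds.ratis;
   ```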





[GitHub] [hadoop] arp7 commented on a change in pull request #1323: HDDS-1094. Performance test infrastructure : skip writing user data on Datanode. Contributed by Supratim Deka

2019-08-21 Thread GitBox
arp7 commented on a change in pull request #1323: HDDS-1094. Performance test 
infrastructure : skip writing user data on Datanode. Contributed by Supratim 
Deka
URL: https://github.com/apache/hadoop/pull/1323#discussion_r316494881
 
 

 ##
 File path: hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -251,6 +252,12 @@ public void init(OzoneConfiguration configuration) throws IOException {
   @Override
   public Void call() throws Exception {
     if (ozoneConfiguration != null) {
+      if (ozoneConfiguration.getBoolean(HddsConfigKeys.HDDS_CONTAINER_PERSISTDATA,
 
 Review comment:
   Is this required because you added a test for freon with the null 
ChunkManager?





[GitHub] [hadoop] arp7 commented on a change in pull request #1323: HDDS-1094. Performance test infrastructure : skip writing user data on Datanode. Contributed by Supratim Deka

2019-08-21 Thread GitBox
arp7 commented on a change in pull request #1323: HDDS-1094. Performance test 
infrastructure : skip writing user data on Datanode. Contributed by Supratim 
Deka
URL: https://github.com/apache/hadoop/pull/1323#discussion_r316493928
 
 

 ##
 File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerFactory.java
 ##
 @@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.container.keyvalue.impl;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.ozone.container.keyvalue.interfaces.ChunkManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_PERSISTDATA;
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_PERSISTDATA_DEFAULT;
+
+/**
+ * Select an appropriate ChunkManager implementation as per config setting.
+ * Ozone ChunkManager is a Singleton.
+ */
+public final class ChunkManagerFactory {
+  static final Logger LOG = LoggerFactory.getLogger(ChunkManagerFactory.class);
+
+  private static ChunkManager instance = null;
+  private static boolean syncChunks = false;
+
+  private ChunkManagerFactory() {
+  }
+
+  public static ChunkManager getChunkManager(Configuration config,
+      boolean sync) {
+    if (instance == null) {
+      synchronized (ChunkManagerFactory.class) {
+        if (instance == null) {
+          instance = createChunkManager(config, sync);
+          syncChunks = sync;
+        }
+      }
+    }
+
+    Preconditions.checkArgument((syncChunks == sync),
+        "value of sync conflicts with previous invocation");
+    return instance;
+  }
+
+  private static ChunkManager createChunkManager(Configuration config,
+      boolean sync) {
+    ChunkManager manager = null;
+    boolean persist = config.getBoolean(HDDS_CONTAINER_PERSISTDATA,
+        HDDS_CONTAINER_PERSISTDATA_DEFAULT);
+
+    if (persist == false) {
+      boolean scrubber = config.getBoolean(
+          HddsConfigKeys.HDDS_CONTAINERSCRUB_ENABLED,
+          HddsConfigKeys.HDDS_CONTAINERSCRUB_ENABLED_DEFAULT);
+      if (scrubber) {
+        // Data Scrubber needs to be disabled for non-persistent chunks.
+        LOG.warn("Failed to set " + HDDS_CONTAINER_PERSISTDATA + " to false."
+            + " Please set " + HddsConfigKeys.HDDS_CONTAINERSCRUB_ENABLED
+            + " also to false to enable non-persistent containers.");
+        persist = true;
+      }
+    }
+
+    if (persist == true) {
+      manager = new ChunkManagerImpl(sync);
+    } else {
+      LOG.warn(HDDS_CONTAINER_PERSISTDATA
 
 Review comment:
   Also augment this message to say that this setting should never be enabled 
outside of a test environment.





[GitHub] [hadoop] arp7 commented on a change in pull request #1323: HDDS-1094. Performance test infrastructure : skip writing user data on Datanode. Contributed by Supratim Deka

2019-08-21 Thread GitBox
arp7 commented on a change in pull request #1323: HDDS-1094. Performance test 
infrastructure : skip writing user data on Datanode. Contributed by Supratim 
Deka
URL: https://github.com/apache/hadoop/pull/1323#discussion_r316493799
 
 

 ##
 File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerFactory.java
 ##
 @@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.container.keyvalue.impl;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.ozone.container.keyvalue.interfaces.ChunkManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_PERSISTDATA;
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_PERSISTDATA_DEFAULT;
+
+/**
+ * Select an appropriate ChunkManager implementation as per config setting.
+ * Ozone ChunkManager is a Singleton.
+ */
+public final class ChunkManagerFactory {
+  static final Logger LOG = LoggerFactory.getLogger(ChunkManagerFactory.class);
+
+  private static ChunkManager instance = null;
+  private static boolean syncChunks = false;
+
+  private ChunkManagerFactory() {
+  }
+
+  public static ChunkManager getChunkManager(Configuration config,
+      boolean sync) {
+    if (instance == null) {
+      synchronized (ChunkManagerFactory.class) {
+        if (instance == null) {
+          instance = createChunkManager(config, sync);
+          syncChunks = sync;
+        }
+      }
+    }
+
+    Preconditions.checkArgument((syncChunks == sync),
+        "value of sync conflicts with previous invocation");
+    return instance;
+  }
+
+  private static ChunkManager createChunkManager(Configuration config,
+      boolean sync) {
+    ChunkManager manager = null;
+    boolean persist = config.getBoolean(HDDS_CONTAINER_PERSISTDATA,
+        HDDS_CONTAINER_PERSISTDATA_DEFAULT);
+
+    if (persist == false) {
+      boolean scrubber = config.getBoolean(
+          HddsConfigKeys.HDDS_CONTAINERSCRUB_ENABLED,
+          HddsConfigKeys.HDDS_CONTAINERSCRUB_ENABLED_DEFAULT);
+      if (scrubber) {
+        // Data Scrubber needs to be disabled for non-persistent chunks.
+        LOG.warn("Failed to set " + HDDS_CONTAINER_PERSISTDATA + " to false."
+            + " Please set " + HddsConfigKeys.HDDS_CONTAINERSCRUB_ENABLED
+            + " also to false to enable non-persistent containers.");
+        persist = true;
+      }
+    }
+
+    if (persist == true) {
 
 Review comment:
   `if (persist)`





[GitHub] [hadoop] arp7 commented on a change in pull request #1323: HDDS-1094. Performance test infrastructure : skip writing user data on Datanode. Contributed by Supratim Deka

2019-08-21 Thread GitBox
arp7 commented on a change in pull request #1323: HDDS-1094. Performance test 
infrastructure : skip writing user data on Datanode. Contributed by Supratim 
Deka
URL: https://github.com/apache/hadoop/pull/1323#discussion_r316493701
 
 

 ##
 File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerFactory.java
 ##
 @@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.container.keyvalue.impl;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.ozone.container.keyvalue.interfaces.ChunkManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_PERSISTDATA;
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_PERSISTDATA_DEFAULT;
+
+/**
+ * Select an appropriate ChunkManager implementation as per config setting.
+ * Ozone ChunkManager is a Singleton.
+ */
+public final class ChunkManagerFactory {
+  static final Logger LOG = LoggerFactory.getLogger(ChunkManagerFactory.class);
+
+  private static ChunkManager instance = null;
+  private static boolean syncChunks = false;
+
+  private ChunkManagerFactory() {
+  }
+
+  public static ChunkManager getChunkManager(Configuration config,
+      boolean sync) {
+    if (instance == null) {
+      synchronized (ChunkManagerFactory.class) {
+        if (instance == null) {
+          instance = createChunkManager(config, sync);
+          syncChunks = sync;
+        }
+      }
+    }
+
+    Preconditions.checkArgument((syncChunks == sync),
+        "value of sync conflicts with previous invocation");
+    return instance;
+  }
+
+  private static ChunkManager createChunkManager(Configuration config,
+      boolean sync) {
+    ChunkManager manager = null;
+    boolean persist = config.getBoolean(HDDS_CONTAINER_PERSISTDATA,
+        HDDS_CONTAINER_PERSISTDATA_DEFAULT);
+
+    if (persist == false) {
 
 Review comment:
   `!persist`





[GitHub] [hadoop] arp7 commented on a change in pull request #1323: HDDS-1094. Performance test infrastructure : skip writing user data on Datanode. Contributed by Supratim Deka

2019-08-21 Thread GitBox
arp7 commented on a change in pull request #1323: HDDS-1094. Performance test 
infrastructure : skip writing user data on Datanode. Contributed by Supratim 
Deka
URL: https://github.com/apache/hadoop/pull/1323#discussion_r316493551
 
 

 ##
 File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerFactory.java
 ##
 @@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.container.keyvalue.impl;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.ozone.container.keyvalue.interfaces.ChunkManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_PERSISTDATA;
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_PERSISTDATA_DEFAULT;
+
+/**
+ * Select an appropriate ChunkManager implementation as per config setting.
+ * Ozone ChunkManager is a Singleton.
+ */
+public final class ChunkManagerFactory {
+  static final Logger LOG = LoggerFactory.getLogger(ChunkManagerFactory.class);
+
+  private static ChunkManager instance = null;
+  private static boolean syncChunks = false;
+
+  private ChunkManagerFactory() {
+  }
+
+  public static ChunkManager getChunkManager(Configuration config,
+      boolean sync) {
+    if (instance == null) {
 
 Review comment:
   Let's remove this null check.
   
   https://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
   
   In Java, if you are using this pattern, then `instance` should be volatile.
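   For reference, a minimal sketch of the safe variant of this idiom, if the check is kept at all:
   ```java
   // Double-checked locking is only correct when the field is volatile,
   // so the fully-constructed object is safely published across threads.
   private static volatile ChunkManager instance;

   public static ChunkManager getChunkManager(Configuration config,
       boolean sync) {
     ChunkManager local = instance;
     if (local == null) {
       synchronized (ChunkManagerFactory.class) {
         local = instance;
         if (local == null) {
           local = createChunkManager(config, sync);
           instance = local;
         }
       }
     }
     return local;
   }
   ```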





[jira] [Created] (HADOOP-16527) Add a whitelist of endpoints to allow simple authentication even if security is enabled

2019-08-21 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16527:
--

 Summary: Add a whitelist of endpoints to allow simple 
authentication even if security is enabled
 Key: HADOOP-16527
 URL: https://issues.apache.org/jira/browse/HADOOP-16527
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Akira Ajisaka


HADOOP-15707 added the /isActive servlet for load balancers and HADOOP-16398 
added the /prom servlet for the prometheus server. However, the prometheus 
server and most load balancers do not support kerberos authentication, so when 
kerberos authentication is enabled they cannot access these endpoints.

We would like to propose a whitelist of endpoints that are exempt from kerberos 
authentication.
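One possible shape for such a whitelist, purely illustrative (no such property 
exists yet; the name below is made up for this proposal):

{code:xml}
<!-- Illustrative only: the property name below does not exist yet. -->
<property>
  <name>hadoop.http.authentication.kerberos.endpoint.whitelist</name>
  <value>/isActive,/prom</value>
  <description>Endpoints that would fall back to simple authentication
    even when kerberos authentication is enabled for the HTTP server.</description>
</property>
{code}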






[GitHub] [hadoop] aajisaka commented on issue #1298: HADOOP-16061. Upgrade Yetus to 0.10.0

2019-08-21 Thread GitBox
aajisaka commented on issue #1298: HADOOP-16061. Upgrade Yetus to 0.10.0
URL: https://github.com/apache/hadoop/pull/1298#issuecomment-523738639
 
 
   Committed. Thank you, @steveloughran 





[jira] [Resolved] (HADOOP-16061) Update Apache Yetus to 0.10.0

2019-08-21 Thread Akira Ajisaka (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka resolved HADOOP-16061.

Fix Version/s: 3.1.3, 3.2.1, 3.3.0
Resolution: Fixed

Committed to trunk, branch-3.2, and branch-3.1. Closing.

> Update Apache Yetus to 0.10.0
> -
>
> Key: HADOOP-16061
> URL: https://issues.apache.org/jira/browse/HADOOP-16061
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
>
> Yetus 0.10.0 is out. Let's upgrade.






[GitHub] [hadoop] aajisaka closed pull request #1298: HADOOP-16061. Upgrade Yetus to 0.10.0

2019-08-21 Thread GitBox
aajisaka closed pull request #1298: HADOOP-16061. Upgrade Yetus to 0.10.0
URL: https://github.com/apache/hadoop/pull/1298
 
 
   





[jira] [Updated] (HADOOP-16448) Connection to Hadoop homepage is not secure

2019-08-21 Thread Akira Ajisaka (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-16448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-16448:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Fixed by https://github.com/apache/hadoop-site/pull/7

> Connection to Hadoop homepage is not secure
> ---
>
> Key: HADOOP-16448
> URL: https://issues.apache.org/jira/browse/HADOOP-16448
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.23.11, 2.4.0, 2.5.0, 2.4.1, 2.5.1, 2.5.2, 2.6.0, 
> 2.6.1, 2.7.0, 2.8.0, 2.7.1, 2.7.2, 2.6.2, 2.6.3, 2.7.3, 2.9.0, 2.6.4, 2.6.5, 
> 2.7.4, 2.8.1, 2.8.2, 2.8.3, 2.7.5, 3.0.0, 3.1.0, 2.9.1, 3.0.1, 2.8.4, 2.7.6, 
> 3.2.0, 3.0.2, 3.1.1, 2.9.2, 3.0.3, 2.7.7, 2.8.5
>Reporter: Kaspar Tint
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Attachments: Screen Shot 2019-07-23 at 9.37.54 AM.png
>
>
> When visiting the [Hadoop 
> website|https://hadoop.apache.org/docs/r3.2.0/index.html] with the latest 
> Firefox browser (v 68.0.1) it appears that the website cannot be reached 
> through secure means by default.
> The culprit seems to be the fact that the two header images presented on the 
> page are loaded in via *HTTP*
>  !Screen Shot 2019-07-23 at 9.37.54 AM.png!.
> These images are located in the respective locations:
> http://hadoop.apache.org/images/hadoop-logo.jpg
> http://www.apache.org/images/asf_logo_wide.png
> These images can be reached also from the following locations:
> https://hadoop.apache.org/images/hadoop-logo.jpg
> https://www.apache.org/images/asf_logo_wide.png
> As one can see, a fix could be made to include the two header images on the 
> page in a safer way.
> I feel like I am in danger when reading the Hadoop documentation from the 
> official Hadoop webpage in a non secure way. Thus I felt the need to open 
> this ticket and raise the issue in order to have a future where everyone can 
> learn from Hadoop documentation in a safe and secure way.






[GitHub] [hadoop] arp7 commented on a change in pull request #1130: HDDS-1827. Load Snapshot info when OM Ratis server starts.

2019-08-21 Thread GitBox
arp7 commented on a change in pull request #1130: HDDS-1827. Load Snapshot info 
when OM Ratis server starts.
URL: https://github.com/apache/hadoop/pull/1130#discussion_r316474797
 
 

 ##
 File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
 ##
 @@ -1063,4 +1066,101 @@ static String createKey(OzoneBucket ozoneBucket) throws IOException {
     ozoneOutputStream.close();
     return keyName;
   }
+
+  @Test
+  public void testOMRestart() throws Exception {
+    // Get the leader OM
+    String leaderOMNodeId = objectStore.getClientProxy().getOMProxyProvider()
+        .getCurrentProxyOMNodeId();
+    OzoneManager leaderOM = cluster.getOzoneManager(leaderOMNodeId);
+
+    // Get follower OMs
+    OzoneManager followerOM1 = cluster.getOzoneManager(
+        leaderOM.getPeerNodes().get(0).getOMNodeId());
+    OzoneManager followerOM2 = cluster.getOzoneManager(
+        leaderOM.getPeerNodes().get(1).getOMNodeId());
+
+    // Do some transactions so that the log index increases
+    String userName = "user" + RandomStringUtils.randomNumeric(5);
+    String adminName = "admin" + RandomStringUtils.randomNumeric(5);
+    String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
+    String bucketName = "bucket" + RandomStringUtils.randomNumeric(5);
+
+    VolumeArgs createVolumeArgs = VolumeArgs.newBuilder()
+        .setOwner(userName)
+        .setAdmin(adminName)
+        .build();
+
+    objectStore.createVolume(volumeName, createVolumeArgs);
+    OzoneVolume retVolumeinfo = objectStore.getVolume(volumeName);
+
+    retVolumeinfo.createBucket(bucketName);
+    OzoneBucket ozoneBucket = retVolumeinfo.getBucket(bucketName);
+
+    for (int i = 0; i < 10; i++) {
+      createKey(ozoneBucket);
+    }
+
+    long lastAppliedTxOnFollowerOM =
+        followerOM1.getOmRatisServer().getStateMachineLastAppliedIndex();
+
+    // Stop one follower OM
+    followerOM1.stop();
+
+    // Do more transactions. Stopped OM should miss these transactions and
+    // the logs corresponding to at least some of the missed transactions
+    // should be purged. This will force the OM to install snapshot when
+    // restarted.
+    long minNewTxIndex = lastAppliedTxOnFollowerOM + (LOG_PURGE_GAP * 10);
 
 Review comment:
   There is no guarantee that the purge has occurred, right?
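   A sketch of one way to avoid relying on that, polling until the purge is actually observed (isLogPurgedPast is a made-up check, not a real API):
   ```java
   // Poll instead of assuming LOG_PURGE_GAP * 10 transactions guarantee a purge.
   GenericTestUtils.waitFor(
       () -> isLogPurgedPast(leaderOM, lastAppliedTxOnFollowerOM),
       1000,     // re-check every second
       100000);  // time out after 100 seconds
   ```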





[jira] [Commented] (HADOOP-16485) Remove dependency on jackson

2019-08-21 Thread Duo Zhang (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-16485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912860#comment-16912860 ]

Duo Zhang commented on HADOOP-16485:


OK, at some points in the past we upgraded the jackson dependencies from 1.x to 
2.x, and I believe that also broke compatibility. So what is the rule? Can we 
only do this for a major release, or even for a minor release?

> Remove dependency on jackson
> 
>
> Key: HADOOP-16485
> URL: https://issues.apache.org/jira/browse/HADOOP-16485
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> Looking at git history, there were 5 commits related to updating jackson 
> versions due to various CVEs since 2018. And it seems to be getting worse 
> recently.
> Filing this jira to discuss the possibility of removing the jackson 
> dependency once and for all. I see that jackson is deeply integrated into the 
> Hadoop codebase, so this is not a trivial task. However, if Hadoop is forced 
> to make a new set of releases because of Jackson vulnerabilities, it may 
> start to look not so costly.
> At the very least, consider stripping the jackson-databind code, since that's 
> where the majority of CVEs come from.






[jira] [Commented] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy

2019-08-21 Thread Hudson (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912858#comment-16912858 ]

Hudson commented on HADOOP-16494:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17165 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17165/])
HADOOP-16494. Add SHA-512 checksum to release artifact to comply with 
(aajisaka: rev 34dd9ee36674be670013d4fc3d9b8f5b36886812)
* (edit) dev-support/bin/create-release


> Add SHA-256 or SHA-512 checksum to release artifacts to comply with the 
> release distribution policy
> ---
>
> Key: HADOOP-16494
> URL: https://issues.apache.org/jira/browse/HADOOP-16494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
>
> Originally reported by [~ctubbsii]: 
> https://lists.apache.org/thread.html/db2f5d5d8600c405293ebfb3bfc415e200e59f72605c5a920a461c09@%3Cgeneral.hadoop.apache.org%3E
> bq. None of the artifacts seem to have valid detached checksum files that are 
> in compliance with https://www.apache.org/dev/release-distribution There 
> should be some ".shaXXX" files in there, and not just the (optional) ".mds" 
> files.






[jira] [Updated] (HADOOP-16494) Add SHA-256 or SHA-512 checksum to release artifacts to comply with the release distribution policy

2019-08-21 Thread Akira Ajisaka (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-16494:
---
Fix Version/s: 3.1.3, 2.9.3, 3.2.1, 2.8.6, 3.3.0, 2.10.0
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.2, branch-3.1, branch-2, branch-2.9, and 
branch-2.8.

> Add SHA-256 or SHA-512 checksum to release artifacts to comply with the 
> release distribution policy
> ---
>
> Key: HADOOP-16494
> URL: https://issues.apache.org/jira/browse/HADOOP-16494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
>
> Originally reported by [~ctubbsii]: 
> https://lists.apache.org/thread.html/db2f5d5d8600c405293ebfb3bfc415e200e59f72605c5a920a461c09@%3Cgeneral.hadoop.apache.org%3E
> bq. None of the artifacts seem to have valid detached checksum files that are 
> in compliance with https://www.apache.org/dev/release-distribution There 
> should be some ".shaXXX" files in there, and not just the (optional) ".mds" 
> files.






[GitHub] [hadoop] aajisaka closed pull request #1243: HADOOP-16494. Add SHA-512 checksum to release artifact to comply with the release distribution policy

2019-08-21 Thread GitBox
aajisaka closed pull request #1243: HADOOP-16494. Add SHA-512 checksum to 
release artifact to comply with the release distribution policy
URL: https://github.com/apache/hadoop/pull/1243
 
 
   





[GitHub] [hadoop] aajisaka commented on issue #1243: HADOOP-16494. Add SHA-512 checksum to release artifact to comply with the release distribution policy

2019-08-21 Thread GitBox
aajisaka commented on issue #1243: HADOOP-16494. Add SHA-512 checksum to 
release artifact to comply with the release distribution policy
URL: https://github.com/apache/hadoop/pull/1243#issuecomment-523715860
 
 
   Thanks @ctubbsii and @steveloughran for reviewing this. Committed.





[jira] [Created] (HADOOP-16526) Support LDAP authentication (bind) via GSSAPI

2019-08-21 Thread Todd Lipcon (Jira)
Todd Lipcon created HADOOP-16526:


 Summary: Support LDAP authentication (bind) via GSSAPI
 Key: HADOOP-16526
 URL: https://issues.apache.org/jira/browse/HADOOP-16526
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Todd Lipcon


Currently the LDAP group mapping provider only supports simple (user/password) 
authentication. In some cases it's more convenient to use GSSAPI (kerberos) 
authentication here, particularly when the server doing the mapping is already 
using a keytab provided by the same instance (e.g. IPA or AD). We should provide 
a configuration to turn on GSSAPI and put the right UGI 'doAs' calls in place 
to ensure an appropriate Subject in those calls.
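A hedged sketch of what the GSSAPI bind under a UGI 'doAs' might look like; 
names and flow are illustrative, not the eventual patch:

{code:java}
// Illustrative only: run the JNDI LDAP bind inside the login user's doAs
// so SASL/GSSAPI picks up the kerberos credentials from the Subject.
UserGroupInformation ugi = UserGroupInformation.getLoginUser();
DirContext ctx = ugi.doAs((PrivilegedExceptionAction<DirContext>) () -> {
  Hashtable<String, String> env = new Hashtable<>();
  env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
  env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
  env.put(Context.SECURITY_AUTHENTICATION, "GSSAPI");
  return new InitialDirContext(env);
});
{code}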



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-21 Thread GitBox
anuengineer commented on issue #1263: HDDS-1927. Consolidate add/remove Acl 
into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-523691615
 
 
   Committed to trunk. Thanks for the reviews and Contribution.





[GitHub] [hadoop] anuengineer closed pull request #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-21 Thread GitBox
anuengineer closed pull request #1263: HDDS-1927. Consolidate add/remove Acl 
into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-21 Thread GitBox
anuengineer commented on issue #1263: HDDS-1927. Consolidate add/remove Acl 
into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-523684615
 
 
   +1, I have verified the test failures and at a high level they do not seem 
to be related to this patch. I am going to commit this patch now. We can follow 
up with other JIRAs to fix the test issues if we discover they are indeed 
related.
   @xiaoyuyao @bharatviswa504 





[GitHub] [hadoop] anuengineer commented on issue #1329: HDDS-738. Removing REST protocol support from OzoneClient

2019-08-21 Thread GitBox
anuengineer commented on issue #1329: HDDS-738. Removing REST protocol support 
from OzoneClient
URL: https://github.com/apache/hadoop/pull/1329#issuecomment-523683749
 
 
   There are some minor issues; otherwise, it looks quite good.
   1. Unit test failure looks connected to this patch.
   ```
   [ERROR] Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 2.141 s <<< FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
   [ERROR] testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  Time elapsed: 0.58 s  <<< FAILURE!
   java.lang.AssertionError: 
   Expected to find 'Couldn't create protocol ' but got unexpected exception: java.io.IOException: Couldn't create RpcClient protocol
       at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:207)
   ```
   
   2. The Checkstyle issues -- mostly unused imports.
   3. One minor JavaDoc issue.
   





[jira] [Commented] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-21 Thread Hadoop QA (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912758#comment-16912758 ]

Hadoop QA commented on HADOOP-16517:


| (x) *-1 overall* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 26s | Maven dependency ordering for branch |
| +1 | mvninstall | 18m 6s | trunk passed |
| +1 | compile | 15m 59s | trunk passed |
| +1 | checkstyle | 2m 12s | trunk passed |
| +1 | mvnsite | 3m 0s | trunk passed |
| +1 | shadedclient | 16m 56s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 53s | trunk passed |
| +1 | javadoc | 2m 48s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 16s | the patch passed |
| +1 | compile | 15m 20s | the patch passed |
| +1 | javac | 15m 21s | the patch passed |
| -0 | checkstyle | 2m 9s | root: The patch generated 3 new + 560 unchanged - 0 fixed = 563 total (was 560) |
| +1 | mvnsite | 2m 59s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 11m 9s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 21s | the patch passed |
| +1 | javadoc | 2m 40s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 41s | hadoop-common in the patch passed. |
| -1 | unit | 79m 43s | hadoop-hdfs in the patch failed. |
| +1 | unit | 3m 51s | hadoop-yarn-common in the patch passed. |
| +1 | asflicense | 0m 44s | The patch does not generate ASF License warnings. |
| | | 199m 1s | |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16517 |
| JIRA Patch URL | 

[GitHub] [hadoop] hadoop-yetus commented on issue #670: HDDS-1347. In OM HA getS3Secret call Should happen only leader OM.

2019-08-21 Thread GitBox
hadoop-yetus commented on issue #670: HDDS-1347. In OM HA getS3Secret call 
Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/670#issuecomment-523675019
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 614 | trunk passed |
   | +1 | compile | 349 | trunk passed |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 797 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 419 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 612 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 536 | the patch passed |
   | +1 | compile | 357 | the patch passed |
   | +1 | cc | 357 | the patch passed |
   | +1 | javac | 357 | the patch passed |
   | +1 | checkstyle | 64 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 632 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 96 | hadoop-ozone generated 1 new + 26 unchanged - 0 fixed 
= 27 total (was 26) |
   | +1 | findbugs | 665 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 329 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3436 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 9101 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.dn.scrubber.TestDataScrubber |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.ozShell.TestS3Shell |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-670/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/670 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 1cbd7ceb0ff2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8fc6567 |
   | Default Java | 1.8.0_222 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-670/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-670/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-670/6/testReport/ |
   | Max. process+thread count | 4302 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-670/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16525) LDAP group mapping should include primary posix group

2019-08-21 Thread Todd Lipcon (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912735#comment-16912735
 ] 

Todd Lipcon commented on HADOOP-16525:
--

Worth noting that this case isn't handled properly by the "isPosix" path 
currently in the code. Namely, with FreeIPA, the 'member' attributes of the 
groups refer to the user by DN rather than by UID.

Regarding the "primary group" issue, there already seem to be some bugs here: 
I don't think LDAP guarantees any ordering of its results, so even on the 
existing ID-based POSIX path we don't return the primary group first. 
[~liuml07] [~giovanni.fumarola] [~lukmajercak] [~dapengsun] it looks like you 
folks may have worked on this code most recently. Mind giving your thoughts on 
this patch?
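
For illustration, a FreeIPA posix group entry looks roughly like the following 
(all names below are hypothetical); note that 'member' carries the user's full 
DN, so a memberUid-style match on the bare username never fires:

{code}
# Illustrative FreeIPA group entry -- DNs and names are made up.
dn: cn=engineers,cn=groups,cn=accounts,dc=example,dc=com
objectClass: posixgroup
objectClass: groupofnames
cn: engineers
gidNumber: 10005
# 'member' holds the user's DN, not a bare uid:
member: uid=todd,cn=users,cn=accounts,dc=example,dc=com
{code}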

> LDAP group mapping should include primary posix group
> -
>
> Key: HADOOP-16525
> URL: https://issues.apache.org/jira/browse/HADOOP-16525
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
> Attachments: hadoop-16525.txt
>
>
> When configuring LdapGroupsMapping against FreeIPA, the current 
> implementation searches for groups which have the user listed as a member. 
> This catches all "secondary" groups but misses the user's primary group 
> (typically the same name as their username). We should include a search for a 
> group matching the user's primary gidNumber in the group search.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on issue #1130: HDDS-1827. Load Snapshot info when OM Ratis server starts.

2019-08-21 Thread GitBox
hanishakoneru commented on issue #1130: HDDS-1827. Load Snapshot info when OM 
Ratis server starts.
URL: https://github.com/apache/hadoop/pull/1130#issuecomment-523669837
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16525) LDAP group mapping should include primary posix group

2019-08-21 Thread Todd Lipcon (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912726#comment-16912726
 ] 

Todd Lipcon commented on HADOOP-16525:
--

One potential wrinkle in this patch: do we currently consider the _first_ group 
to be "primary" in the unix group mapping? I seem to recall we had some special 
treatment of the first element in the returned list, in which case this patch 
should probably move the primary group to the front before returning results.

Another question is whether this should be on or off by default. In this patch 
it's on by default, but for compatibility reasons maybe it's best not to do that?
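
As a sketch of what that reordering could look like (the method context and 
variable names here are assumptions, not the attached patch):

{code:java}
// Hedged sketch: ensure the primary group leads the returned list, since
// callers may treat the first element as the user's primary group.
List<String> ordered = new ArrayList<>(resolvedGroups);
if (ordered.remove(primaryGroupName)) {
  ordered.add(0, primaryGroupName);
}
return ordered;
{code}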

> LDAP group mapping should include primary posix group
> -
>
> Key: HADOOP-16525
> URL: https://issues.apache.org/jira/browse/HADOOP-16525
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
> Attachments: hadoop-16525.txt
>
>
> When configuring LdapGroupsMapping against FreeIPA, the current 
> implementation searches for groups which have the user listed as a member. 
> This catches all "secondary" groups but misses the user's primary group 
> (typically the same name as their username). We should include a search for a 
> group matching the user's primary gidNumber in the group search.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16525) LDAP group mapping should include primary posix group

2019-08-21 Thread Todd Lipcon (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912723#comment-16912723
 ] 

Todd Lipcon commented on HADOOP-16525:
--

Attached a preliminary patch here -- it is missing unit tests, but I was able 
to get it working against a FreeIPA server. I'll follow up later with tests 
unless someone has time to take this over the finish line (I'm personally a 
bit swamped at the moment).

> LDAP group mapping should include primary posix group
> -
>
> Key: HADOOP-16525
> URL: https://issues.apache.org/jira/browse/HADOOP-16525
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
> Attachments: hadoop-16525.txt
>
>
> When configuring LdapGroupsMapping against FreeIPA, the current 
> implementation searches for groups which have the user listed as a member. 
> This catches all "secondary" groups but misses the user's primary group 
> (typically the same name as their username). We should include a search for a 
> group matching the user's primary gidNumber in the group search.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16525) LDAP group mapping should include primary posix group

2019-08-21 Thread Todd Lipcon (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-16525:
-
Attachment: hadoop-16525.txt

> LDAP group mapping should include primary posix group
> -
>
> Key: HADOOP-16525
> URL: https://issues.apache.org/jira/browse/HADOOP-16525
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
> Attachments: hadoop-16525.txt
>
>
> When configuring LdapGroupsMapping against FreeIPA, the current 
> implementation searches for groups which have the user listed as a member. 
> This catches all "secondary" groups but misses the user's primary group 
> (typically the same name as their username). We should include a search for a 
> group matching the user's primary gidNumber in the group search.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16525) LDAP group mapping should include primary posix group

2019-08-21 Thread Todd Lipcon (Jira)
Todd Lipcon created HADOOP-16525:


 Summary: LDAP group mapping should include primary posix group
 Key: HADOOP-16525
 URL: https://issues.apache.org/jira/browse/HADOOP-16525
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Todd Lipcon


When configuring LdapGroupsMapping against FreeIPA, the current implementation 
searches for groups which have the user listed as a member. This catches all 
"secondary" groups but misses the user's primary group (typically the same name 
as their username). We should include a search for a group matching the user's 
primary gidNumber in the group search.
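
A minimal sketch of the extra lookup over plain JNDI, assuming a standard 
posixAccount/posixGroup schema ('ctx', 'baseDn' and 'userAttrs' come from the 
surrounding mapping code; the filter and attribute names are illustrative, not 
the actual patch):

{code:java}
String findPrimaryGroup(DirContext ctx, String baseDn, Attributes userAttrs)
    throws NamingException {
  // The primary gid is stored on the user entry itself.
  String gid = userAttrs.get("gidNumber").get().toString();

  // Find the one group whose gidNumber matches it.
  SearchControls controls = new SearchControls();
  controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
  NamingEnumeration<SearchResult> results = ctx.search(baseDn,
      "(&(objectClass=posixGroup)(gidNumber=" + gid + "))", controls);
  return results.hasMore()
      ? results.next().getAttributes().get("cn").get().toString()
      : null;
}
{code}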



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912710#comment-16912710
 ] 

Wei-Chiu Chuang commented on HADOOP-16524:
--

Should it retry immediately upon a loading failure? Say the credential file is 
only partially written and the Hadoop daemon loads it at the same moment, as 
in HDFS-14567.
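
As a sketch of what a defensive reload could look like, assuming the Jetty 9.4 
SslContextFactory.reload hook mentioned in the quoted description below 
(retry count, interval and logger are assumed names, not the attached patch):

{code:java}
// Hedged sketch: retry the in-place reload so a keystore file caught
// mid-write does not permanently fail the certificate update.
void reloadWithRetry(SslContextFactory factory, int maxAttempts,
    long retryIntervalMs) throws InterruptedException {
  for (int attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      factory.reload(f -> { });  // re-reads the configured keystore path
      return;
    } catch (Exception e) {      // e.g. a truncated, partially written store
      LOG.warn("Keystore reload attempt {}/{} failed", attempt, maxAttempts, e);
      Thread.sleep(retryIntervalMs);
    }
  }
}
{code}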

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystores. This allows a Hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1255: HDDS-1935. Improve the visibility with Ozone Insight tool

2019-08-21 Thread GitBox
hadoop-yetus commented on issue #1255: HDDS-1935. Improve the visibility with 
Ozone Insight tool
URL: https://github.com/apache/hadoop/pull/1255#issuecomment-523656816
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for branch |
   | +1 | mvninstall | 657 | trunk passed |
   | +1 | compile | 388 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 791 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | trunk passed |
   | 0 | spotbugs | 464 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 704 | trunk passed |
   | -0 | patch | 513 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 44 | Maven dependency ordering for patch |
   | +1 | mvninstall | 577 | the patch passed |
   | +1 | compile | 393 | the patch passed |
   | +1 | javac | 393 | the patch passed |
   | -0 | checkstyle | 43 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 26 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 8 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 667 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | -1 | findbugs | 429 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 304 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2388 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 8502 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1255/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml shellcheck shelldocs |
   | uname | Linux 89c76896c261 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 217e748 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1255/5/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1255/5/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1255/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1255/5/testReport/ |
   | Max. process+thread count | 4673 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/config hadoop-hdds/framework 
hadoop-hdds/server-scm hadoop-ozone hadoop-ozone/common hadoop-ozone/dist 
hadoop-ozone/insight hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1255/5/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912685#comment-16912685
 ] 

Hadoop QA commented on HADOOP-16524:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 24s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16524 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978221/HADOOP-16524.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8a3d71b0c9b6 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2ae7f44 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16497/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16497/testReport/ |
| Max. process+thread count | 1422 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #670: HDDS-1347. In OM HA getS3Secret call Should happen only leader OM.

2019-08-21 Thread GitBox
bharatviswa504 commented on a change in pull request #670: HDDS-1347. In OM HA 
getS3Secret call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/670#discussion_r316390671
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2683,6 +2683,14 @@ public void deleteS3Bucket(String s3BucketName) throws 
IOException {
* {@inheritDoc}
*/
   public S3SecretValue getS3Secret(String kerberosID) throws IOException{
+UserGroupInformation user = ProtobufRpcEngine.Server.getRemoteUser();
+
+// Check whether user name passed is matching with the current user or not.
 
 Review comment:
   This was based on related review comments made when the original 
getS3Secret change was implemented, so I implemented it as part of this.
   
   Without this, one user could fetch another user's secrets.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #670: HDDS-1347. In OM HA getS3Secret call Should happen only leader OM.

2019-08-21 Thread GitBox
bharatviswa504 commented on a change in pull request #670: HDDS-1347. In OM HA 
getS3Secret call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/670#discussion_r316390872
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/security/S3GetSecretRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.security;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import org.apache.commons.codec.digest.DigestUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.S3SecretValue;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.security.S3GetSecretResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.GetS3SecretRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.GetS3SecretResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.UpdateGetS3SecretRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.S3Secret;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_SECRET_LOCK;
+
+/**
+ * Handles GetS3Secret request.
+ */
+public class S3GetSecretRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(S3GetSecretRequest.class);
+
+  public S3GetSecretRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+GetS3SecretRequest s3GetSecretRequest =
+getOmRequest().getGetS3SecretRequest();
+
+// Generate S3 Secret to be used by OM quorum.
+String kerberosID = s3GetSecretRequest.getKerberosID();
+
+UserGroupInformation user = ProtobufRpcEngine.Server.getRemoteUser();
+if (!user.getUserName().equals(kerberosID)) {
+  throw new OMException("User mismatch. Requested user name is " +
+  "mismatched " + kerberosID +", with current user " +
+  user.getUserName(), OMException.ResultCodes.USER_MISMATCH);
+}
+
+String s3Secret = DigestUtils.sha256Hex(OmUtils.getSHADigest());
+
+UpdateGetS3SecretRequest updateGetS3SecretRequest =
+UpdateGetS3SecretRequest.newBuilder()
+.setAwsSecret(s3Secret)
+.setKerberosID(kerberosID).build();
+
+// Client issues GetS3Secret request, when received by OM leader
+// it will generate s3Secret. Original GetS3Secret request is
+// converted to UpdateGetS3Secret request with the generated token
+// information. This updated request will be submitted to Ratis. In this
+// way S3Secret created by leader, will be replicated across all
+// OMs. With this approach, original GetS3Secret request from
+// client does not need any proto changes.
+OMRequest.Builder omRequest = OMRequest.newBuilder()
+.setUserInfo(getUserInfo())
+.setUpdateGetS3SecretRequest(updateGetS3SecretRequest)
+.setCmdType(getOmRequest().getCmdType())
+.setClientId(getOmRequest().getClientId());
+
+if 

[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1331: HDDS-2002. Update documentation for 0.4.1 release.

2019-08-21 Thread GitBox
xiaoyuyao commented on a change in pull request #1331: HDDS-2002. Update 
documentation for 0.4.1 release.
URL: https://github.com/apache/hadoop/pull/1331#discussion_r316397152
 
 

 ##
 File path: hadoop-hdds/docs/content/security/SecurityAcls.md
 ##
 @@ -63,23 +63,20 @@ volume and keys in a bucket. Please note: Under Ozone, 
Only admins can create vo
  to the volume and buckets which allow listing of the child objects. Please 
note: The user and admins can list the volumes owned by the user.
 3. **Delete** – Allows the user to delete a volume, bucket or key.
 4. **Read** – Allows the user to read the metadata of a Volume and Bucket and
-data stream and metadata of a key(object).
+data stream and metadata of a key.
 5. **Write** - Allows the user to write the metadata of a Volume and Bucket and
-allows the user to overwrite an existing ozone key(object).
+allows the user to overwrite an existing ozone key.
 6. **Read_ACL** – Allows a user to read the ACL on a specific object.
 7. **Write_ACL** – Allows a user to write the ACL on a specific object.
 
-Ozone Native ACL APIs Work in
-progress
+Ozone Native ACL APIs
 
 The ACLs can be manipulated by a set of APIs supported by Ozone. The APIs
 supported are:
 
-1. **SetAcl** – This API will take user principal, the name of the object, type
- of the object and a list of ACLs.
-
-2. **GetAcl** – This API will take the name of an ozone object and type of the
-object and will return a list of ACLs.
-3. **RemoveAcl** - It is possible that we might support an API called RemoveACL
- as a convenience API, but in reality it is just a GetACL followed by SetACL
- with an etag to avoid conflicts.
+1. **SetAcl** – This API will take user principal, the name, type
+of the ozone object and a list of ACLs.
+2. **GetAcl** – This API will take the name and type of the ozone object
+and will return a list of ACLs.
+3. **RemoveAcl** - This API will take the name, type of the
 
 Review comment:
   I think we are missing the following here:
   **AddAcl** - This API will take the user principal, the name and type of 
the ozone object, and an ozone ACL, and add it to the existing ACLs of the 
ozone object.
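   
   For reference, the four calls side by side would look roughly like this (a 
sketch of the shape of the API as described in this page, not the exact 
published interface):
   
   ```java
   // Signatures are assumptions based on the descriptions above.
   interface OzoneAclApi {
     boolean addAcl(OzoneObj obj, OzoneAcl acl);        // add one ACL
     boolean setAcl(OzoneObj obj, List<OzoneAcl> acls); // replace the ACL list
     List<OzoneAcl> getAcl(OzoneObj obj);               // read back the ACLs
     boolean removeAcl(OzoneObj obj, OzoneAcl acl);     // remove one ACL
   }
   ```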


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1331: HDDS-2002. Update documentation for 0.4.1 release.

2019-08-21 Thread GitBox
xiaoyuyao commented on a change in pull request #1331: HDDS-2002. Update 
documentation for 0.4.1 release.
URL: https://github.com/apache/hadoop/pull/1331#discussion_r316390840
 
 

 ##
 File path: hadoop-hdds/docs/content/security/SecuringTDE.md
 ##
 @@ -22,20 +22,19 @@ icon: lock
   limitations under the License.
 -->
 
-## Transparent Data Encryption
 Ozone TDE setup process and usage are very similar to HDFS TDE.
 The major difference is that Ozone TDE is enabled at Ozone bucket level
 when a bucket is created.
 
 ### Setting up the Key Management Server
 
-To use TDE, clients must setup a Key Management server and provide that URI to
+To use TDE, clients must setup a Key Management Server and provide that URI to
 Ozone/HDFS. Since Ozone and HDFS can use the same Key Management Server, this
  configuration can be provided via *hdfs-site.xml*.
 
 Review comment:
   I think this should be core-site.xml instead of hdfs-site.xml, since this 
is a Hadoop-wide configuration rather than an HDFS-specific one.
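   
   i.e. something along these lines in core-site.xml (the host and port here 
are placeholders):
   
   ```xml
   <property>
     <name>hadoop.security.key.provider.path</name>
     <value>kms://http@kms-host:9600/kms</value>
   </property>
   ```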


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #670: HDDS-1347. In OM HA getS3Secret call Should happen only leader OM.

2019-08-21 Thread GitBox
bharatviswa504 commented on a change in pull request #670: HDDS-1347. In OM HA 
getS3Secret call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/670#discussion_r316390671
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2683,6 +2683,14 @@ public void deleteS3Bucket(String s3BucketName) throws 
IOException {
* {@inheritDoc}
*/
   public S3SecretValue getS3Secret(String kerberosID) throws IOException{
+UserGroupInformation user = ProtobufRpcEngine.Server.getRemoteUser();
+
+// Check whether user name passed is matching with the current user or not.
 
 Review comment:
   This was based on related review comments made when the original 
getS3Secret change was implemented, so I implemented it as part of this.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-08-21 Thread GitBox
hadoop-yetus commented on issue #1332: HADOOP-16445. Allow separate custom 
signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#issuecomment-523637550
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 97 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1454 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 37 | trunk passed |
   | +1 | shadedclient | 820 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 31 | trunk passed |
   | 0 | spotbugs | 71 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 70 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 45 | the patch passed |
   | +1 | compile | 36 | the patch passed |
   | +1 | javac | 36 | the patch passed |
   | +1 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 0 new 
+ 33 unchanged - 1 fixed = 33 total (was 34) |
   | +1 | mvnsite | 36 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 851 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | the patch passed |
   | +1 | findbugs | 71 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 86 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 3865 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1332 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0fb3ea60aedb 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2ae7f44 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/1/testReport/ |
   | Max. process+thread count | 434 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #670: HDDS-1347. In OM HA getS3Secret call Should happen only leader OM.

2019-08-21 Thread GitBox
arp7 commented on a change in pull request #670: HDDS-1347. In OM HA 
getS3Secret call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/670#discussion_r316387503
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/security/S3GetSecretRequest.java
 ##
 @@ -0,0 +1,193 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.security;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import com.google.common.base.Optional;
+import org.apache.commons.codec.digest.DigestUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.S3SecretValue;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.s3.security.S3GetSecretResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.GetS3SecretRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.GetS3SecretResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.UpdateGetS3SecretRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.S3Secret;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_SECRET_LOCK;
+
+/**
+ * Handles GetS3Secret request.
+ */
+public class S3GetSecretRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(S3GetSecretRequest.class);
+
+  public S3GetSecretRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+GetS3SecretRequest s3GetSecretRequest =
+getOmRequest().getGetS3SecretRequest();
+
+// Generate S3 Secret to be used by OM quorum.
+String kerberosID = s3GetSecretRequest.getKerberosID();
+
+UserGroupInformation user = ProtobufRpcEngine.Server.getRemoteUser();
+if (!user.getUserName().equals(kerberosID)) {
+  throw new OMException("User mismatch. Requested user name is " +
+  "mismatched " + kerberosID +", with current user " +
+  user.getUserName(), OMException.ResultCodes.USER_MISMATCH);
+}
+
+String s3Secret = DigestUtils.sha256Hex(OmUtils.getSHADigest());
+
+UpdateGetS3SecretRequest updateGetS3SecretRequest =
+UpdateGetS3SecretRequest.newBuilder()
+.setAwsSecret(s3Secret)
+.setKerberosID(kerberosID).build();
+
+// Client issues GetS3Secret request, when received by OM leader
+// it will generate s3Secret. Original GetS3Secret request is
+// converted to UpdateGetS3Secret request with the generated token
+// information. This updated request will be submitted to Ratis. In this
+// way S3Secret created by leader, will be replicated across all
+// OMs. With this approach, original GetS3Secret request from
+// client does not need any proto changes.
+OMRequest.Builder omRequest = OMRequest.newBuilder()
+.setUserInfo(getUserInfo())
+.setUpdateGetS3SecretRequest(updateGetS3SecretRequest)
+.setCmdType(getOmRequest().getCmdType())
+.setClientId(getOmRequest().getClientId());
+
+if 

[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1331: HDDS-2002. Update documentation for 0.4.1 release.

2019-08-21 Thread GitBox
xiaoyuyao commented on a change in pull request #1331: HDDS-2002. Update 
documentation for 0.4.1 release.
URL: https://github.com/apache/hadoop/pull/1331#discussion_r316385562
 
 

 ##
 File path: hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md
 ##
 @@ -64,18 +64,22 @@ Create a new directory for customizing the created docker 
image.
 
 Copy the `ozone-site.xml` from the cluster:
 
-```
+```bash
 kubectl cp om-0:/opt/hadoop/etc/hadoop/ozone-site.xml .
 ```
 
-And create a custom `core-site.xml`:
+And create a custom `core-site.xml`.
 
-```
+```xml
 
 
 <name>fs.o3fs.impl</name>
 <value>org.apache.hadoop.fs.ozone.BasicOzoneFileSystem</value>
 </property>
+<property>
+<name>fs.AbstractFileSystem.o3fs.impl</name>
+<value>org.apache.hadoop.fs.ozone.OzFs</value>
+</property>
 
 Review comment:
   This should be BasicOzFs to match BasicOzoneFileSystem.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1331: HDDS-2002. Update documentation for 0.4.1 release.

2019-08-21 Thread GitBox
xiaoyuyao commented on a change in pull request #1331: HDDS-2002. Update 
documentation for 0.4.1 release.
URL: https://github.com/apache/hadoop/pull/1331#discussion_r316384457
 
 

 ##
 File path: hadoop-hdds/docs/content/interface/S3.md
 ##
 @@ -116,7 +116,7 @@ aws s3api --endpoint http://localhost:9878 create-bucket 
--bucket bucket1
 
 To show the storage location of an S3 bucket, use the `ozone s3 path <bucketname>` command.
 
-```
+```bash
 aws s3api --endpoint-url http://localhost:9878 create-bucket --bucket=bucket1
 
 ozone s3 path bucket1
 
 Review comment:
   Can you confirm the output volume name is "s3thisisakey"?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #670: HDDS-1347. In OM HA getS3Secret call Should happen only leader OM.

2019-08-21 Thread GitBox
arp7 commented on a change in pull request #670: HDDS-1347. In OM HA 
getS3Secret call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/670#discussion_r316384114
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2683,6 +2683,14 @@ public void deleteS3Bucket(String s3BucketName) throws 
IOException {
* {@inheritDoc}
*/
   public S3SecretValue getS3Secret(String kerberosID) throws IOException{
+UserGroupInformation user = ProtobufRpcEngine.Server.getRemoteUser();
+
+// Check whether user name passed is matching with the current user or not.
 
 Review comment:
   Why is this new check required?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1331: HDDS-2002. Update documentation for 0.4.1 release.

2019-08-21 Thread GitBox
xiaoyuyao commented on a change in pull request #1331: HDDS-2002. Update 
documentation for 0.4.1 release.
URL: https://github.com/apache/hadoop/pull/1331#discussion_r316383515
 
 

 ##
 File path: hadoop-hdds/docs/content/interface/S3.md
 ##
 @@ -93,7 +93,7 @@ If security is not enabled, you can *use* **any** 
AWS_ACCESS_KEY_ID and AWS_SECR
 
 If security is enabled, you can get the key and the secret with the `ozone s3 
getsecret` command (*kerberos based authentication is required).
 
-```
+```bash
 /etc/security/keytabs/testuser.keytab testuser/s...@example.com
 
 Review comment:
   This should be "kinit -kt /etc/security/keytabs/testuser.keytab 
testuser/s...@example.com"


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1331: HDDS-2002. Update documentation for 0.4.1 release.

2019-08-21 Thread GitBox
hadoop-yetus commented on issue #1331: HDDS-2002. Update documentation for 
0.4.1 release.
URL: https://github.com/apache/hadoop/pull/1331#issuecomment-523625247
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 610 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1422 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 531 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 677 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 2850 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1331/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1331 |
   | Optional Tests | dupname asflicense mvnsite |
   | uname | Linux 0fe256b79983 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 217e748 |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1331/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer closed pull request #1180: HDDS-1871. Remove anti-affinity rules from k8s minkube example

2019-08-21 Thread GitBox
anuengineer closed pull request #1180: HDDS-1871. Remove anti-affinity rules 
from k8s minkube example
URL: https://github.com/apache/hadoop/pull/1180
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1327: HDDS-1999. Basic acceptance test and SCM/OM web UI broken by Bootstrap upgrade

2019-08-21 Thread GitBox
adoroszlai commented on issue #1327: HDDS-1999. Basic acceptance test and 
SCM/OM web UI broken by Bootstrap upgrade
URL: https://github.com/apache/hadoop/pull/1327#issuecomment-523611418
 
 
   Thanks @bharatviswa504 for review/commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #1327: HDDS-1999. Basic acceptance test and SCM/OM web UI broken by Bootstrap upgrade

2019-08-21 Thread GitBox
bharatviswa504 merged pull request #1327: HDDS-1999. Basic acceptance test and 
SCM/OM web UI broken by Bootstrap upgrade
URL: https://github.com/apache/hadoop/pull/1327
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1327: HDDS-1999. Basic acceptance test and SCM/OM web UI broken by Bootstrap upgrade

2019-08-21 Thread GitBox
adoroszlai commented on issue #1327: HDDS-1999. Basic acceptance test and 
SCM/OM web UI broken by Bootstrap upgrade
URL: https://github.com/apache/hadoop/pull/1327#issuecomment-523606639
 
 
   @elek @vivekratnavel breaking the dependency is fine, but can we please get 
rid of the acceptance test failures?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16438) Introduce a config to control SSL Channel mode in Azure DataLake Store Gen1

2019-08-21 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-16438:
---
Attachment: HADOOP-16438.001.patch

> Introduce a config to control SSL Channel mode in Azure DataLake Store Gen1
> ---
>
> Key: HADOOP-16438
> URL: https://issues.apache.org/jira/browse/HADOOP-16438
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/adl
>Affects Versions: 2.9.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Minor
> Attachments: HADOOP-16438.001.patch, HADOOP-16438.patch
>
>
> Currently there is no user control possible over the SSL channel mode used for 
> server connections. It will try to connect using SSLChannelMode.OpenSSL and 
> fall back to SSLChannelMode.Default_JSE when there is any issue. 
> A new config is needed to toggle the choice if any issues are observed with 
> OpenSSL. 
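
As a rough illustration of the proposed toggle (not the actual patch; the config key below is a hypothetical placeholder, the real name is defined by the attached patch), a deployment would pin the channel mode instead of relying on the OpenSSL-first fallback:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class AdlSslChannelModeSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // Hypothetical key/value: force the default JSSE provider when
    // OpenSSL misbehaves, instead of the automatic OpenSSL-first choice.
    conf.set("adl.ssl.channel.mode", "Default_JSE");
    System.out.println(conf.get("adl.ssl.channel.mode"));
  }
}
{code}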



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #1316: HDDS-1973. Implement OM RenewDelegationToken request to use Cache and DoubleBuffer.

2019-08-21 Thread GitBox
bharatviswa504 merged pull request #1316: HDDS-1973. Implement OM 
RenewDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1316: HDDS-1973. Implement OM RenewDelegationToken request to use Cache and DoubleBuffer.

2019-08-21 Thread GitBox
bharatviswa504 commented on issue #1316: HDDS-1973. Implement OM 
RenewDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316#issuecomment-523604746
 
 
   Test failures are not related to this patch.
   Thank You @arp7 and @xiaoyuyao for the review.
   I will commit this to the trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #670: HDDS-1347. In OM HA getS3Secret call Should happen only leader OM.

2019-08-21 Thread GitBox
bharatviswa504 commented on issue #670: HDDS-1347. In OM HA getS3Secret call 
Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/670#issuecomment-523603281
 
 
   Rebased this. It is ready for review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912590#comment-16912590
 ] 

Kihwal Lee edited comment on HADOOP-16524 at 8/21/19 6:37 PM:
--

This does not cover DataNode, since its front-end is netty-based; the 
HttpServer2/Jetty-based server is internal. Unlike HttpServer2, the netty-based 
DatanodeHttpServer still uses SSLFactory. We have internally modified 
SSLFactory to enable automatic reloading of certs. This will also make the 
secure mapreduce shuffle server reload its cert. I can add it to this patch if 
people are interested. We have used it for several years in production.


was (Author: kihwal):
This does not cover DataNode, since its front-end is netty-based; the 
HttpServer2/Jetty-based server is internal. Unlike HttpServer2, the netty-based 
DatanodeHttpServer still uses SSLFactory. We have internally modified 
SSLFactory to enable automatic reloading of certs. This will also make the 
secure mapreduce shuffle server reload its cert. I can add it to this patch if 
people are interested.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.
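
As a sketch of the Jetty 9 facility this refers to (assuming the Jetty 9.4 SslContextFactory.reload API; the actual HttpServer2 wiring is in the attached patch):

{code:java}
import org.eclipse.jetty.util.ssl.SslContextFactory;

public final class KeystoreReloadSketch {
  // Re-read the keystore in place so a renewed cert is picked up without
  // restarting the daemon. A file watcher or periodic task would call this
  // when the keystore file changes.
  static void reloadKeystore(SslContextFactory sslContextFactory)
      throws Exception {
    // The consumer may adjust factory settings before the internal
    // SSLContext is rebuilt; nothing needs changing here.
    sslContextFactory.reload(factory -> { });
  }
}
{code}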



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16445) Allow separate custom signing algorithms for S3 and DDB

2019-08-21 Thread Siddharth Seth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912592#comment-16912592
 ] 

Siddharth Seth commented on HADOOP-16445:
-

Have posted a PR.

On the STS question - at the moment, it is going to end up using the existing 
configuration parameter - i.e. fs.s3a.signing-algorithm - and the overrides for 
S3A/DDB will not have an effect on this. I could add an override for STS as 
well if that makes sense.

For STS - if fs.s3a.signing-algorithm is not set, the signer is not overridden.
For S3 - if fs.s3a.s3.signing-algorithm is set, the signer is overridden with 
this value. Otherwise the existing behaviour continues (similar to what is 
described for STS above).
For DDB - if fs.s3a.ddb.signing-algorithm is set, the signer is overridden 
with this value. Otherwise the existing behaviour continues (similar to what is 
described for STS above).
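
A small sketch of the fallback order described above (illustrative only; the key names are the ones from this comment, the helper method is made up):

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class SignerOverrideSketch {
  // Per-service key wins if set; otherwise fall back to the shared
  // fs.s3a.signing-algorithm key. A null result means the SDK default
  // signer is left untouched.
  static String resolveSigner(Configuration conf, String serviceKey) {
    String override = conf.get(serviceKey);
    return override != null ? override : conf.get("fs.s3a.signing-algorithm");
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("fs.s3a.ddb.signing-algorithm", "AWS4SignerType");
    // DDB picks up its specific override; S3 falls back to the shared key,
    // which is unset here, so no override is applied.
    System.out.println(resolveSigner(conf, "fs.s3a.ddb.signing-algorithm"));
    System.out.println(resolveSigner(conf, "fs.s3a.s3.signing-algorithm"));
  }
}
{code}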

> Allow separate custom signing algorithms for S3 and DDB
> ---
>
> Key: HADOOP-16445
> URL: https://issues.apache.org/jira/browse/HADOOP-16445
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Major
> Attachments: HADOOP-16445.01.patch, HADOOP-16445.02.patch
>
>
> fs.s3a.signing-algorithm allows overriding the signer. This applies to both 
> the S3 and DDB clients. Need to be able to specify separate signing algorithm 
> overrides for S3 and DDB.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912590#comment-16912590
 ] 

Kihwal Lee commented on HADOOP-16524:
-

This does not cover DataNode, since its front-end is netty-based; the 
HttpServer2/Jetty-based server is internal. Unlike HttpServer2, the netty-based 
DatanodeHttpServer still uses SSLFactory. We have internally modified 
SSLFactory to enable automatic reloading of certs. This will also make the 
secure mapreduce shuffle server reload its cert. I can add it to this patch if 
people are interested.

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16524:

Assignee: Kihwal Lee
  Status: Patch Available  (was: Open)

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16524:

Description: Jetty 9 simplified reloading of keystore.   This allows hadoop 
daemon's SSL cert to be updated in place without having to restart the service. 
 (was: Jetty 9 simplified reloading of keystore. )

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16524:

Attachment: HADOOP-16524.patch

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16524.patch
>
>
> Jetty 9 simplified reloading of keystore.   This allows hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sidseth opened a new pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-08-21 Thread GitBox
sidseth opened a new pull request #1332: HADOOP-16445. Allow separate custom 
signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332
 
 
   This is very similar to the original patch.
   In terms of testing - I have run tests against a bucket in us-east-1 
(including with the patch posted on HADOOP-16477). I am struggling a bit with 
failures though, which seem completely unrelated to the patch. Still trying to 
get my test configuration file to a state where tests pass.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on issue #1327: HDDS-1999. Basic acceptance test and SCM/OM web UI broken by Bootstrap upgrade

2019-08-21 Thread GitBox
vivekratnavel commented on issue #1327: HDDS-1999. Basic acceptance test and 
SCM/OM web UI broken by Bootstrap upgrade
URL: https://github.com/apache/hadoop/pull/1327#issuecomment-523591186
 
 
   I agree with @elek. We should have a copy of all the static dependencies 
inside HDDS and not depend on HDFS.  


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)
Kihwal Lee created HADOOP-16524:
---

 Summary: Automatic keystore reloading for HttpServer2
 Key: HADOOP-16524
 URL: https://issues.apache.org/jira/browse/HADOOP-16524
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee


Jetty 9 simplified reloading of keystore. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912585#comment-16912585
 ] 

Kihwal Lee commented on HADOOP-16517:
-

Added support for YARN. Tested on a small cluster.

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16517.1.patch, HADOOP-16517.patch
>
>
> Currently the webservice can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually a server-side config. It has been deprecated 
> from the client config.)  A hadoop client can talk to an mTLS-enforced web 
> service by setting "hadoop.ssl.require.client.cert" with proper ssl config.
> We have seen use cases where mTLS needs to be enabled optionally for only 
> those clients who supply their cert. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking the existence of an x509 cert in the request.
>  
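
A sketch of the per-endpoint check described above (illustrative only; the class and method names are made up, but the servlet request attribute is the standard one):

{code:java}
import java.io.IOException;
import java.security.cert.X509Certificate;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public final class OptionalMtlsCheck {
  // Under optional mTLS the connector accepts connections with or without
  // a client cert; a handler can still reject requests that lack one.
  static boolean hasClientCert(HttpServletRequest req) {
    X509Certificate[] certs = (X509Certificate[])
        req.getAttribute("javax.servlet.request.X509Certificate");
    return certs != null && certs.length > 0;
  }

  static void requireClientCert(HttpServletRequest req,
      HttpServletResponse resp) throws IOException {
    if (!hasClientCert(req)) {
      resp.sendError(HttpServletResponse.SC_FORBIDDEN,
          "client certificate required for this endpoint");
    }
  }
}
{code}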



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16517) Allow optional mutual TLS in HttpServer2

2019-08-21 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-16517:

Attachment: HADOOP-16517.1.patch

> Allow optional mutual TLS in HttpServer2
> 
>
> Key: HADOOP-16517
> URL: https://issues.apache.org/jira/browse/HADOOP-16517
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HADOOP-16517.1.patch, HADOOP-16517.patch
>
>
> Currently the webservice can enforce mTLS by setting 
> "dfs.client.https.need-auth" on the server side. (The config name is 
> misleading, as it is actually a server-side config. It has been deprecated 
> from the client config.)  A hadoop client can talk to an mTLS-enforced web 
> service by setting "hadoop.ssl.require.client.cert" with proper ssl config.
> We have seen use cases where mTLS needs to be enabled optionally for only 
> those clients who supply their cert. In a mixed environment like this, 
> individual services may still enforce mTLS for a subset of endpoints by 
> checking the existence of an x509 cert in the request.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1328: HDDS-1998. TestSecureContainerServer#testClientServerRatisGrpc is fai…

2019-08-21 Thread GitBox
adoroszlai commented on issue #1328: HDDS-1998. 
TestSecureContainerServer#testClientServerRatisGrpc is fai…
URL: https://github.com/apache/hadoop/pull/1328#issuecomment-523584365
 
 
   /label ozone


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 opened a new pull request #1331: HDDS-2002. Update documentation for 0.4.1 release.

2019-08-21 Thread GitBox
nandakumar131 opened a new pull request #1331: HDDS-2002. Update documentation 
for 0.4.1 release.
URL: https://github.com/apache/hadoop/pull/1331
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek edited a comment on issue #1327: HDDS-1999. Basic acceptance test and SCM/OM web UI broken by Bootstrap upgrade

2019-08-21 Thread GitBox
elek edited a comment on issue #1327: HDDS-1999. Basic acceptance test and 
SCM/OM web UI broken by Bootstrap upgrade
URL: https://github.com/apache/hadoop/pull/1327#issuecomment-523559049
 
 
   HDDS-2000 is also related but I would like to remove the dependencies 
between HDFS and HDDS web pages.
   
   But we can separate the version bump (this jira) and the resource copy 
(HDDS-2000)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on issue #1327: HDDS-1999. Basic acceptance test and SCM/OM web UI broken by Bootstrap upgrade

2019-08-21 Thread GitBox
elek commented on issue #1327: HDDS-1999. Basic acceptance test and SCM/OM web 
UI broken by Bootstrap upgrade
URL: https://github.com/apache/hadoop/pull/1327#issuecomment-523559049
 
 
   HDDS-2000 is also related but I would like to remove the dependencies 
between HDFS and HDDS web pages.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1327: HDDS-1999. Basic acceptance test and SCM/OM web UI broken by Bootstrap upgrade

2019-08-21 Thread GitBox
adoroszlai commented on issue #1327: HDDS-1999. Basic acceptance test and 
SCM/OM web UI broken by Bootstrap upgrade
URL: https://github.com/apache/hadoop/pull/1327#issuecomment-523557696
 
 
   @vivekratnavel @elek please review


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xkrogen commented on issue #1320: HDFS-14755. [Dynamometer] Hadoop-2 DataNode fail to start

2019-08-21 Thread GitBox
xkrogen commented on issue #1320: HDFS-14755. [Dynamometer] Hadoop-2 DataNode 
fail to start
URL: https://github.com/apache/hadoop/pull/1320#issuecomment-523549105
 
 
   Can you explain why it works? This is the code `MiniDFSCluster` uses to 
parse it (within `GenericTestUtils`):
   ```java
   public static File getTestDir() {
     String prop =
         System.getProperty(SYSPROP_TEST_DATA_DIR, DEFAULT_TEST_DATA_DIR);
     if (prop.isEmpty()) {
       // corner case: property is there but empty
       prop = DEFAULT_TEST_DATA_DIR;
     }
     File dir = new File(prop).getAbsoluteFile();
   ```
   (`SYSPROP_TEST_DATA_DIR` is the same as `PROP_TEST_BUILD_DATA`)
   
   I would expect `new File(prop)` to complain if you give it a comma-separated 
list of directories.
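   
   For illustration (a self-contained sketch, not code from the patch), `java.io.File` treats a comma-separated value as one literal path name:
   ```java
   import java.io.File;

   public class CommaPathExample {
     public static void main(String[] args) {
       // File does not split on commas; the whole string is one path.
       File dir = new File("/data/1,/data/2").getAbsoluteFile();
       System.out.println(dir.getPath()); // prints "/data/1,/data/2"
       // mkdirs() would create a single directory with a comma in its
       // name rather than two separate data dirs.
     }
   }
   ```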


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #1316: HDDS-1973. Implement OM RenewDelegationToken request to use Cache and DoubleBuffer.

2019-08-21 Thread GitBox
xiaoyuyao commented on issue #1316: HDDS-1973. Implement OM 
RenewDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316#issuecomment-523547291
 
 
   +1 from me as well. It seems the CI is broken, pending fixes from HDDS-1999/2000.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-08-21 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912498#comment-16912498
 ] 

Erik Krogen commented on HADOOP-16268:
--

Great [~crh]! Thanks for the new changes. I have some small comments, mostly on 
the tests.
 
* For the comment on {{IPC_CALLQUEUE_SERVER_FAILOVER_ENABLE}}, it seems that 
there is already a block of similar configs above with a general comment 
explaining how the namespacing works. If we push this key into that same block, 
I think we can remove this comment and rely on that one?
* My IDE gives me a few warnings about {{testInsertionWithFailover}}:
** {{Exception}} is never thrown; you can remove the {{throws}}
** You can use diamond-typing for {{new FairCallQueue<>()}}
** {{p2}} is never used
* {{testInsertionWithFailover}} is great, but a bit long. Can we refactor a 
method like:
{code}
private void addToQueueAndVerify(Schedulable call, int expectedQueue0, int 
expectedQueue1, int expectedQueue2) {
  Mockito.reset(fcq);
  fcq.add(call);
  Mockito.verify(fcq, times(expectedQueue0)).offerQueue(0, call);
  Mockito.verify(fcq, times(expectedQueue1)).offerQueue(1, call);
  Mockito.verify(fcq, times(expectedQueue2)).offerQueue(2, call);
}
{code}

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HADOOP-16268.001.patch, HADOOP-16268.002.patch
>
>
> In the current implementation of callqueue manager, 
> "CallQueueOverflowException" exceptions are always wrapping 
> "RetriableException". Through configs servers should be allowed to throw 
> custom exceptions based on new use cases.
> In CallQueueManager.java for backoff the below is done 
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException, clients would 
> end up hitting the same server for retries. In use cases that the router 
> supports, these overflowed requests could be handled by another router that 
> shares the same state, thus distributing load across a cluster of routers 
> better. In the absence of any custom exception, the current behavior should 
> be supported.
> In CallQueueOverflowException class a new Standby exception wrap should be 
> created. Something like the below
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1316: HDDS-1973. Implement OM RenewDelegationToken request to use Cache and DoubleBuffer.

2019-08-21 Thread GitBox
bharatviswa504 commented on issue #1316: HDDS-1973. Implement OM 
RenewDelegationToken request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1316#issuecomment-523542524
 
 
   @xiaoyuyao Can I proceed with commit?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-21 Thread GitBox
bharatviswa504 commented on issue #1263: HDDS-1927. Consolidate add/remove Acl 
into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#issuecomment-523541572
 
 
   +1. Can you verify the test results once?
For acceptance, it is opening in GitHub, so I am not able to verify the results.
IT failures mostly look unrelated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1263: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-21 Thread GitBox
xiaoyuyao commented on a change in pull request #1263: HDDS-1927. Consolidate 
add/remove Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1263#discussion_r316275771
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
 ##
 @@ -235,123 +231,22 @@ public FileEncryptionInfo getFileEncryptionInfo() {
 return encInfo;
   }
 
-  public List<OzoneAclInfo> getAcls() {
+  public List<OzoneAcl> getAcls() {
 
 Review comment:
   Sounds good to me. Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] XBaith commented on issue #1325: Fix a typo in TextInputFormat.java

2019-08-21 Thread GitBox
XBaith commented on issue #1325: Fix a typo in TextInputFormat.java
URL: https://github.com/apache/hadoop/pull/1325#issuecomment-523523575
 
 
   Thank you very much. I didn't find the right way until you told me.
   I will be more careful next time, thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1327: HDDS-1999. Basic acceptance test and SCM/OM web UI broken by Bootstrap upgrade

2019-08-21 Thread GitBox
hadoop-yetus commented on issue #1327: HDDS-1999. Basic acceptance test and 
SCM/OM web UI broken by Bootstrap upgrade
URL: https://github.com/apache/hadoop/pull/1327#issuecomment-523514702
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 101 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 635 | trunk passed |
   | +1 | compile | 362 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1809 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 576 | the patch passed |
   | +1 | compile | 372 | the patch passed |
   | +1 | javac | 372 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 292 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1938 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6339 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1327/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1327 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient |
   | uname | Linux 82563191e4ef 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e684b17 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1327/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1327/1/testReport/ |
   | Max. process+thread count | 3843 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/dist 
hadoop-ozone/ozone-manager hadoop-ozone/s3gateway U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1327/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1313: HDFS-13118. SnapshotDiffReport should provide the INode type.

2019-08-21 Thread GitBox
hadoop-yetus commented on issue #1313: HDFS-13118. SnapshotDiffReport should 
provide the INode type.
URL: https://github.com/apache/hadoop/pull/1313#issuecomment-523513326
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1219 | trunk passed |
   | +1 | compile | 221 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 161 | trunk passed |
   | +1 | shadedclient | 1579 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 303 | trunk passed |
   | 0 | spotbugs | 412 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 739 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for patch |
   | +1 | mvninstall | 314 | the patch passed |
   | +1 | compile | 456 | the patch passed |
   | +1 | cc | 456 | the patch passed |
   | +1 | javac | 456 | the patch passed |
   | -0 | checkstyle | 125 | hadoop-hdfs-project: The patch generated 2 new + 
376 unchanged - 6 fixed = 378 total (was 382) |
   | +1 | mvnsite | 327 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 1017 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 81 | the patch passed |
   | +1 | findbugs | 323 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 119 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 5284 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 12612 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1313 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 0f63838fb8c9 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e684b17 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/5/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/5/testReport/ |
   | Max. process+thread count | 3461 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viczsaurav commented on a change in pull request #1280: HADOOP-16505. Add ability to register custom signer with AWS SignerFactory

2019-08-21 Thread GitBox
viczsaurav commented on a change in pull request #1280: HADOOP-16505. Add 
ability to register custom signer with AWS SignerFactory
URL: https://github.com/apache/hadoop/pull/1280#discussion_r316255348
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ASigningAlgorithmOverride.java
 ##
 @@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import com.amazonaws.ClientConfiguration;
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Test;
+
+import java.util.Objects;
+
+
+/**
+ * Test whether or not Custom Signing Algorithm Override works by turning it 
on.
+ */
+public class TestS3ASigningAlgorithmOverride extends AbstractS3ATestBase {
+
+  @Override
+  protected Configuration createConfiguration() {
+Configuration conf = super.createConfiguration();
+conf.set(Constants.SIGNING_ALGORITHM,
+S3ATestConstants.S3A_SIGNING_ALGORITHM);
+LOG.debug("Inside createConfiguration...");
+return conf;
+  }
+
+  @Test
+  public void testCustomSignerOverride() throws AssertionError {
+LOG.debug("Inside createConfiguration...");
+assertTrue(assertIsCustomSignerLoaded());
 
 Review comment:
   Refactored the code


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on issue #1310: HDDS-1978. Create helper script to run blockade tests.

2019-08-21 Thread GitBox
elek commented on issue #1310: HDDS-1978. Create helper script to run blockade 
tests.
URL: https://github.com/apache/hadoop/pull/1310#issuecomment-523496415
 
 
   Looks good to me. Let me test it by starting the blockade test on the CI 
server.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1313: HDFS-13118. SnapshotDiffReport should provide the INode type.

2019-08-21 Thread GitBox
hadoop-yetus commented on issue #1313: HDFS-13118. SnapshotDiffReport should 
provide the INode type.
URL: https://github.com/apache/hadoop/pull/1313#issuecomment-523493279
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1174 | trunk passed |
   | +1 | compile | 223 | trunk passed |
   | +1 | checkstyle | 57 | trunk passed |
   | +1 | mvnsite | 110 | trunk passed |
   | +1 | shadedclient | 824 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 86 | trunk passed |
   | 0 | spotbugs | 177 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 314 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 110 | the patch passed |
   | +1 | compile | 207 | the patch passed |
   | +1 | cc | 207 | the patch passed |
   | +1 | javac | 207 | the patch passed |
   | -0 | checkstyle | 50 | hadoop-hdfs-project: The patch generated 2 new + 377 unchanged - 6 fixed = 379 total (was 383) |
   | +1 | mvnsite | 103 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 717 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 71 | the patch passed |
   | +1 | findbugs | 291 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 112 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 4924 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 9518 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestDFSStripedInputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1313 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 03e23d8485d2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e684b17 |
   | Default Java | 1.8.0_222 |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/6/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/6/testReport/ |
   | Max. process+thread count | 4325 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek opened a new pull request #1330: HDDS-2000. Don't depend on bootstrap/jquery versions from hadoop-trunk snapshot

2019-08-21 Thread GitBox
elek opened a new pull request #1330: HDDS-2000. Don't depend on 
bootstrap/jquery versions from hadoop-trunk snapshot
URL: https://github.com/apache/hadoop/pull/1330
 
 
   The OM/SCM web pages are broken due to the upgrade in HDFS-14729 (which is a
great patch on the Hadoop side). For more stability I propose to bundle our own
copies of jquery/bootstrap instead of copying whatever version the Hadoop trunk
SNAPSHOT build currently ships.
   
   See: https://issues.apache.org/jira/browse/HDDS-2000


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] supratimdeka commented on a change in pull request #1319: HDDS-1981: Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state

2019-08-21 Thread GitBox
supratimdeka commented on a change in pull request #1319: HDDS-1981: Datanode 
should sync db when container is moved to CLOSED or QUASI_CLOSED state
URL: https://github.com/apache/hadoop/pull/1319#discussion_r316139740
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
 ##
 @@ -365,6 +375,22 @@ void compactDB() throws StorageContainerException {
 }
   }
 
+  private void flushAndSyncDB() throws StorageContainerException {
+    try {
+      try (ReferenceCountedDB db = BlockUtils.getDB(containerData, config)) {
+        db.getStore().flushDB(true);
+        LOG.info("Container {} is synced with bcsId {}.",
 
 Review comment:
   might be a good idea to move this log out into close() and quasiClose(), so
that flushAndSyncDB() stays a utility routine which can, in principle, be
invoked outside of close()/quasiClose().
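   A minimal sketch of the suggested split; the close() body and the plain
IOException signature here are illustrative assumptions, not the actual patch:
   
       // Utility: flush and fsync the container DB. No logging here, so the
       // routine can be reused outside close()/quasiClose().
       private void flushAndSyncDB() throws IOException {
         try (ReferenceCountedDB db = BlockUtils.getDB(containerData, config)) {
           db.getStore().flushDB(true);
         }
       }
   
       public void close() throws IOException {
         // ...existing state transition elided...
         flushAndSyncDB();
         // The informational log moves to the call site, as suggested above.
         LOG.info("Container {} is closed with bcsId {}.",
             containerData.getContainerID(),
             containerData.getBlockCommitSequenceId());
       }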
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lokeshj1703 commented on a change in pull request #1318: HDDS-1753. Datanode unable to find chunk while replication data using ratis.

2019-08-21 Thread GitBox
lokeshj1703 commented on a change in pull request #1318: HDDS-1753. Datanode 
unable to find chunk while replication data using ratis.
URL: https://github.com/apache/hadoop/pull/1318#discussion_r316175405
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 ##
 @@ -654,4 +646,18 @@ public void handleNodeLogFailure(RaftGroupId groupId, 
Throwable t) {
 triggerPipelineClose(groupId, msg,
 ClosePipelineInfo.Reason.PIPELINE_LOG_FAILED, true);
   }
+
+  public long getMinReplicatedIndex(PipelineID pipelineID) throws IOException {
+    Long minIndex = null;
+    Iterator<RaftGroup> raftGroupIterator = getServer().getGroups().iterator();
+    while (raftGroupIterator.hasNext()) {
 
 Review comment:
   We can use the getServer().getGroupInfo(..) API directly here; the while
loop is not needed.
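   Roughly, the loop-free version could look like the sketch below. The
GroupInfoRequest construction (the clientId field, the peer id, nextCallId())
and the reply shape are assumptions about the Ratis API, so treat this as a
sketch of the idea rather than the final code:
   
       public long getMinReplicatedIndex(PipelineID pipelineID)
           throws IOException {
         // Look up the single group for this pipeline instead of iterating
         // over getServer().getGroups().
         RaftGroupId groupId = RaftGroupId.valueOf(pipelineID.getId());
         GroupInfoReply reply = getServer().getGroupInfo(
             new GroupInfoRequest(clientId, getServer().getId(), groupId,
                 nextCallId()));
         return reply.getCommitInfos().stream()
             .mapToLong(RaftProtos.CommitInfoProto::getCommitIndex)
             .min().orElse(0L);
       }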


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lokeshj1703 commented on a change in pull request #1318: HDDS-1753. Datanode unable to find chunk while replication data using ratis.

2019-08-21 Thread GitBox
lokeshj1703 commented on a change in pull request #1318: HDDS-1753. Datanode 
unable to find chunk while replication data using ratis.
URL: https://github.com/apache/hadoop/pull/1318#discussion_r316030192
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
 ##
 @@ -236,14 +244,43 @@ public ContainerReportsProto getContainerReport() throws 
IOException {
   ContainerDeletionChoosingPolicy deletionPolicy)
   throws StorageContainerException {
 Map<Long, ContainerData> containerDataMap = containerMap.entrySet().stream()
-    .filter(e -> deletionPolicy.isValidContainerType(
-        e.getValue().getContainerType()))
-    .collect(Collectors.toMap(Map.Entry::getKey,
-        e -> e.getValue().getContainerData()));
+    .filter(e ->
+        deletionPolicy.isValidContainerType(e.getValue().getContainerType())
+        && isDeletionAllowed(e.getValue().getContainerData()))
+    .collect(Collectors.toMap(Map.Entry::getKey,
+        e -> e.getValue().getContainerData()));
 return deletionPolicy
 .chooseContainerForBlockDeletion(count, containerDataMap);
   }
 
+  private boolean isDeletionAllowed(ContainerData containerData) {
+    if (containerData.isClosed()) {
+      if (writeChannel instanceof XceiverServerRatis) {
+        try {
+          XceiverServerRatis ratisServer = (XceiverServerRatis) writeChannel;
+          long minReplicatedIndex = ratisServer.getMinReplicatedIndex(
+              PipelineID.valueOf(
+                  UUID.fromString(containerData.getOriginPipelineId())));
+          long containerBCSID = containerData.getBlockCommitSequenceId();
+          if (minReplicatedIndex != 0 && minReplicatedIndex < containerBCSID) {
+            LOG.info("Close Container log Index {} is not replicated across"
+                + " all the servers in the pipeline {} as the min replicated"
+                + " index is {}. Deletion is not allowed in this container"
+                + " yet.", containerBCSID, containerData.getOriginPipelineId(),
+                minReplicatedIndex);
+            return false;
+          } else {
+            return true;
+          }
+        } catch (IOException ioe) {
+          LOG.info(ioe.getMessage());
 
 Review comment:
   This should be LOG.error?
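   i.e. log it at error level with the stack trace preserved, along these lines
(the message text is illustrative):
   
       LOG.error("Exception while checking the replicated index of pipeline {}",
           containerData.getOriginPipelineId(), ioe);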


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lokeshj1703 commented on a change in pull request #1318: HDDS-1753. Datanode unable to find chunk while replication data using ratis.

2019-08-21 Thread GitBox
lokeshj1703 commented on a change in pull request #1318: HDDS-1753. Datanode 
unable to find chunk while replication data using ratis.
URL: https://github.com/apache/hadoop/pull/1318#discussion_r316221207
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestDeleteWithSlowFollower.java
 ##
 @@ -0,0 +1,274 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.client.rpc;
+
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.ozone.HddsDatanodeService;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.io.KeyOutputStream;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.container.ContainerTestHelper;
+import org.apache.hadoop.ozone.container.common.helpers.BlockData;
+import org.apache.hadoop.ozone.container.common.helpers.ChunkInfo;
+import org.apache.hadoop.ozone.container.common.interfaces.Container;
+import org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine;
+import org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine;
+import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
+import org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler;
+import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_COMMAND_STATUS_REPORT_INTERVAL;
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_REPORT_INTERVAL;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.HDDS_SCM_WATCHER_TIMEOUT;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_PIPELINE_DESTROY_TIMEOUT;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_STALENODE_INTERVAL;
+
+/**
+ * Tests delete key operation with a slow follower in the datanode
+ * pipeline.
+ */
+public class TestDeleteWithSlowFollower {
+
+  private static MiniOzoneCluster cluster;
+  private static OzoneConfiguration conf;
+  private static OzoneClient client;
+  private static ObjectStore objectStore;
+  private static String volumeName;
+  private static String bucketName;
+  private static String path;
+  private static XceiverClientManager xceiverClientManager;
+
+  /**
+   * Create a MiniDFSCluster for testing.
+   *
+   * @throws IOException
+   */
+  @BeforeClass
+  public static void init() throws Exception {
+conf = new OzoneConfiguration();
+path = GenericTestUtils
+.getTempPath(TestContainerStateMachineFailures.class.getSimpleName());
+File baseDir = new File(path);
+baseDir.mkdirs();
+
+conf.setTimeDuration(HDDS_CONTAINER_REPORT_INTERVAL, 200,
+TimeUnit.MILLISECONDS);
+conf.setTimeDuration(HDDS_COMMAND_STATUS_REPORT_INTERVAL, 200,
+TimeUnit.MILLISECONDS);
+conf.setTimeDuration(HDDS_SCM_WATCHER_TIMEOUT, 1000, 

[GitHub] [hadoop] lokeshj1703 commented on a change in pull request #1318: HDDS-1753. Datanode unable to find chunk while replication data using ratis.

2019-08-21 Thread GitBox
lokeshj1703 commented on a change in pull request #1318: HDDS-1753. Datanode 
unable to find chunk while replication data using ratis.
URL: https://github.com/apache/hadoop/pull/1318#discussion_r316083784
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
 ##
 @@ -68,11 +68,13 @@
 import org.apache.hadoop.ozone.container.common.impl.ContainerData;
 import org.apache.hadoop.ozone.container.common.interfaces.Container;
 import org.apache.hadoop.ozone.container.common.transport.server.XceiverServerSpi;
+import org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine;
 
 Review comment:
   unused import


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



  1   2   >