[jira] [Commented] (HADOOP-16542) Update commons-beanutils version

2019-09-03 Thread kevin su (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921972#comment-16921972
 ] 

kevin su commented on HADOOP-16542:
---

Thanks [~jojochuang] for the reply, it makes sense to remove it from the Hadoop
codebase.

Updated the patch.

> Update commons-beanutils version
> 
>
> Key: HADOOP-16542
> URL: https://issues.apache.org/jira/browse/HADOOP-16542
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.10.0, 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16542.001.patch, HADOOP-16542.002.patch
>
>
> [http://mail-archives.apache.org/mod_mbox/www-announce/201908.mbox/%3cc628798f-315d-4428-8cb1-4ed1ecc95...@apache.org%3e]
>  {quote}
> CVE-2019-10086. Apache Commons Beanutils does not suppress the class
> property in PropertyUtilsBean by default.
> Severity: Medium
> Vendor: The Apache Software Foundation
> Versions Affected: commons-beanutils-1.9.3 and earlier
> Description: A special BeanIntrospector class was added in version 1.9.2.
> This can be used to stop attackers from using the class property of
> Java objects to get access to the classloader.
> However this protection was not enabled by default.
> PropertyUtilsBean (and consequently BeanUtilsBean) now disallows class
> level property access by default, thus protecting against
> CVE-2014-0114.
> Mitigation: 1.X users should migrate to 1.9.4.
> {quote}
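
For anyone stuck on commons-beanutils 1.9.2/1.9.3 until the upgrade lands, the
mitigation can also be opted into explicitly. A minimal sketch using only the
stock commons-beanutils API (not Hadoop code):

import org.apache.commons.beanutils.BeanUtilsBean;
import org.apache.commons.beanutils.SuppressPropertiesBeanIntrospector;

public class BeanutilsHardening {
  public static void main(String[] args) {
    // BeanUtilsBean resolves properties through PropertyUtilsBean; registering
    // SUPPRESS_CLASS rejects the "class" property, closing the classloader
    // access route behind CVE-2019-10086 / CVE-2014-0114.
    BeanUtilsBean beanUtils = new BeanUtilsBean();
    beanUtils.getPropertyUtils().addBeanIntrospector(
        SuppressPropertiesBeanIntrospector.SUPPRESS_CLASS);
  }
}

In 1.9.4 this suppression is the default, which is why migrating is the simpler fix.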






[jira] [Updated] (HADOOP-16542) Update commons-beanutils version

2019-09-03 Thread kevin su (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HADOOP-16542:
--
Attachment: HADOOP-16542.002.patch

> Update commons-beanutils version
> 
>
> Key: HADOOP-16542
> URL: https://issues.apache.org/jira/browse/HADOOP-16542
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.10.0, 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16542.001.patch, HADOOP-16542.002.patch
>
>
> [http://mail-archives.apache.org/mod_mbox/www-announce/201908.mbox/%3cc628798f-315d-4428-8cb1-4ed1ecc95...@apache.org%3e]
>  {quote}
> CVE-2019-10086. Apache Commons Beanutils does not suppress the class
> property in PropertyUtilsBean by default.
> Severity: Medium
> Vendor: The Apache Software Foundation
> Versions Affected: commons-beanutils-1.9.3 and earlier
> Description: A special BeanIntrospector class was added in version 1.9.2.
> This can be used to stop attackers from using the class property of
> Java objects to get access to the classloader.
> However this protection was not enabled by default.
> PropertyUtilsBean (and consequently BeanUtilsBean) now disallows class
> level property access by default, thus protecting against
> CVE-2014-0114.
> Mitigation: 1.X users should migrate to 1.9.4.
> {quote}






[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error

2019-09-03 Thread lqjacklee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921957#comment-16921957
 ] 

lqjacklee commented on HADOOP-16543:


I think two things need to change:
1. Clients should reference an alias/DNS name instead of a raw IP address (see the
sketch below).
2. The alias/DNS name should be updated to point at the new IP address (this part
should be done in Kubernetes).
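
To illustrate point 1, a sketch of the difference between caching a resolved
address and re-resolving per attempt; the hostname and port below are
hypothetical placeholders:

import java.net.InetSocketAddress;

public class ReResolveSketch {
  public static void main(String[] args) {
    String host = "resourcemanager.example.svc.cluster.local"; // hypothetical DNS name
    int port = 8032;

    // Resolving once and holding the result pins the client to the old IP.
    InetSocketAddress cached = new InetSocketAddress(host, port);

    // Creating an unresolved address per retry defers the DNS lookup to
    // connect time, so a restarted pod's new IP is picked up.
    InetSocketAddress fresh = InetSocketAddress.createUnresolved(host, port);
    System.out.println(cached.getAddress() + " / unresolved=" + fresh.isUnresolved());
  }
}

Note the JVM's own DNS cache (networkaddress.cache.ttl) also has to allow
re-resolution for this to work.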

> Cached DNS name resolution error
> 
>
> Key: HADOOP-16543
> URL: https://issues.apache.org/jira/browse/HADOOP-16543
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: Roger Liu
>Priority: Major
>
> In Kubernetes, a node may go down and then come back later with a 
> different IP address. Yarn clients which are already running will be unable 
> to rediscover the node after it comes back up due to caching the original IP 
> address. This is problematic for cases such as Spark HA on Kubernetes, as the 
> node containing the resource manager may go down and come back up, meaning 
> existing node managers must then also be restarted.






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r320570103
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -601,6 +605,13 @@ public OzoneOutputStream createKey(
 HddsClientUtils.verifyResourceName(volumeName, bucketName);
 HddsClientUtils.checkNotNull(keyName, type, factor);
 String requestId = UUID.randomUUID().toString();
+
 
 Review comment:
  Thanks for the info.





[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-03 Thread GitBox
dineshchitlangia commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r320569509
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -601,6 +605,13 @@ public OzoneOutputStream createKey(
 HddsClientUtils.verifyResourceName(volumeName, bucketName);
 HddsClientUtils.checkNotNull(keyName, type, factor);
 String requestId = UUID.randomUUID().toString();
+
 
 Review comment:
@bharatviswa504 The GDPR compliance feature does not depend on whether or not 
the cluster is secure. No matter how you set up the cluster, you can enable the 
GDPR compliance feature.
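
For context, "encrypt/decrypt key using symmetric key" boils down to the
standard javax.crypto flow; a generic sketch (algorithm and key size are
illustrative, not necessarily what the Ozone patch uses):

import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricKeySketch {
  public static void main(String[] args) throws Exception {
    // One secret per object key: destroying the secret renders the stored
    // bytes unreadable, which is the GDPR-style "right to be forgotten".
    KeyGenerator keyGen = KeyGenerator.getInstance("AES");
    keyGen.init(128);
    SecretKey secret = keyGen.generateKey();

    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.ENCRYPT_MODE, secret);
    byte[] encrypted = cipher.doFinal("block data".getBytes(StandardCharsets.UTF_8)); // write path

    cipher.init(Cipher.DECRYPT_MODE, secret);
    byte[] decrypted = cipher.doFinal(encrypted); // read path
    System.out.println(new String(decrypted, StandardCharsets.UTF_8));
  }
}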





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r320568216
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -601,6 +605,13 @@ public OzoneOutputStream createKey(
 HddsClientUtils.verifyResourceName(volumeName, bucketName);
 HddsClientUtils.checkNotNull(keyName, type, factor);
 String requestId = UUID.randomUUID().toString();
+
 
 Review comment:
One question: is the GDPR feature available with/without security enabled in 
Ozone?





[GitHub] [hadoop] xiaoyuyao opened a new pull request #1400: HDDS-2079. Fix TestSecureOzoneManager. Contributed by Xiaoyu Yao.

2019-09-03 Thread GitBox
xiaoyuyao opened a new pull request #1400: HDDS-2079. Fix 
TestSecureOzoneManager. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1400
 
 
   





[GitHub] [hadoop] ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default 
pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320559908
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+ final Configuration conf) {
+super(nodeManager, conf);
+this.nodeManager = nodeManager;
+this.conf = conf;
+heavyNodeCriteria = conf.getInt(
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+   long heavyNodeLimit) {
+return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+      List<DatanodeDetails> excludedNodes, int nodesRequired)
+  throws SCMException {
+// get nodes in HEALTHY state
+    List<DatanodeDetails> healthyNodes =
+        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+if (excludedNodes != null) {
+  healthyNodes.removeAll(excludedNodes);
+}
+String msg;
+if (healthyNodes.size() == 0) {
+  msg = "No healthy node found to allocate container.";
+  LOG.error(msg);
+  throw new SCMException(msg, SCMException.ResultCodes
+  .FAILED_TO_FIND_HEALTHY_NODES);
+}
+
+if (healthyNodes.size() < nodesRequired) {
+  msg = String.format("Not enough healthy nodes to allocate container. %d "
+  + " datanodes required. Found %d",
+  nodesRequired, healthyNodes.size());
+  

[jira] [Commented] (HADOOP-16276) Fix jsvc startup command in hadoop-functions.sh due to jsvc >= 1.0.11 changed default current working directory

2019-09-03 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921919#comment-16921919
 ] 

Siyao Meng commented on HADOOP-16276:
-

It seems Yetus on this JIRA thread is attempting to use the old patch (rev 003).

The precommit on the PR is good: https://github.com/apache/hadoop/pull/1272

> Fix jsvc startup command in hadoop-functions.sh due to jsvc >= 1.0.11 changed 
> default current working directory
> ---
>
> Key: HADOOP-16276
> URL: https://issues.apache.org/jira/browse/HADOOP-16276
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16276.001.patch, HADOOP-16276.002.patch, 
> HADOOP-16276.003.patch
>
>
> In CDH6, when we bumped jsvc from 1.0.10 to 1.1.0 we hit 
> *KerberosAuthException: failure to login / LoginException: Unable to obtain 
> password from user* because of DAEMON-264 and the fact that our 
> *dfs.nfs.keytab.file* config uses a relative path. I will probably file 
> another jira to issue a warning like *hdfs.keytab not found* before the 
> KerberosAuthException in this case.
> The solution is to add *-cwd $(pwd)* in function hadoop_start_secure_daemon 
> in hadoop-functions.sh, but I will have to consider compatibility with 
> older jsvc versions <= 1.0.10. I will post the patch after I have tested it.
> Thanks [~tlipcon] for finding the root cause.






[GitHub] [hadoop] ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default 
pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320557920
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
 
 Review comment:
   Improve these comments





[GitHub] [hadoop] arp7 commented on issue #1377: HDDS-2057. Incorrect Default OM Port in Ozone FS URI Error Message. Contributed by Supratim Deka

2019-09-03 Thread GitBox
arp7 commented on issue #1377: HDDS-2057. Incorrect Default OM Port in Ozone FS 
URI Error Message. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/1377#issuecomment-527722111
 
 
   @bharatviswa504 does the updated patch look okay to you? You had a comment 
on the earlier patch revision.





[jira] [Commented] (HADOOP-16544) update io.netty in branch-2

2019-09-03 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921910#comment-16921910
 ] 

Jonathan Hung commented on HADOOP-16544:


Thanks for raising this, [~jojochuang]. I see there was some good discussion 
here: HADOOP-12928

If we upgrade to 3.10.6.Final (or 3.10.5.Final as per HADOOP-12928) we may also 
need to upgrade zookeeper to 3.4.9. 

I see HADOOP-12928-branch-2.02.patch never made it to branch-2; we can commit 
that to branch-2 (assuming it still applies).

Thoughts?

> update io.netty in branch-2
> ---
>
> Key: HADOOP-16544
> URL: https://issues.apache.org/jira/browse/HADOOP-16544
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
>
> branch-2 pulls in io.netty 3.6.2.Final which is more than 5 years old.
> The latest is 3.10.6.Final. I know updating netty is sensitive but it deserves 
> some attention.






[GitHub] [hadoop] ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default 
pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320555713
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+ final Configuration conf) {
+super(nodeManager, conf);
+this.nodeManager = nodeManager;
+this.conf = conf;
+heavyNodeCriteria = conf.getInt(
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+   long heavyNodeLimit) {
+return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+      List<DatanodeDetails> excludedNodes, int nodesRequired)
+  throws SCMException {
+// get nodes in HEALTHY state
+    List<DatanodeDetails> healthyNodes =
+        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+if (excludedNodes != null) {
+  healthyNodes.removeAll(excludedNodes);
+}
+String msg;
+if (healthyNodes.size() == 0) {
+  msg = "No healthy node found to allocate container.";
+  LOG.error(msg);
+  throw new SCMException(msg, SCMException.ResultCodes
+  .FAILED_TO_FIND_HEALTHY_NODES);
+}
+
+if (healthyNodes.size() < nodesRequired) {
+  msg = String.format("Not enough healthy nodes to allocate container. %d "
+  + " datanodes required. Found %d",
+  nodesRequired, healthyNodes.size());
+  

[GitHub] [hadoop] ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default 
pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320555432
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+ final Configuration conf) {
+super(nodeManager, conf);
+this.nodeManager = nodeManager;
+this.conf = conf;
+heavyNodeCriteria = conf.getInt(
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+   long heavyNodeLimit) {
+return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+      List<DatanodeDetails> excludedNodes, int nodesRequired)
+  throws SCMException {
+// get nodes in HEALTHY state
+    List<DatanodeDetails> healthyNodes =
+        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+if (excludedNodes != null) {
+  healthyNodes.removeAll(excludedNodes);
+}
+String msg;
+if (healthyNodes.size() == 0) {
+  msg = "No healthy node found to allocate container.";
+  LOG.error(msg);
+  throw new SCMException(msg, SCMException.ResultCodes
+  .FAILED_TO_FIND_HEALTHY_NODES);
+}
+
+if (healthyNodes.size() < nodesRequired) {
+  msg = String.format("Not enough healthy nodes to allocate container. %d "
 
 Review comment:
   container -> pipeline



[GitHub] [hadoop] ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default 
pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320555339
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+  LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+ final Configuration conf) {
+super(nodeManager, conf);
+this.nodeManager = nodeManager;
+this.conf = conf;
+heavyNodeCriteria = conf.getInt(
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+   long heavyNodeLimit) {
+return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+      List<DatanodeDetails> excludedNodes, int nodesRequired)
+  throws SCMException {
+// get nodes in HEALTHY state
+    List<DatanodeDetails> healthyNodes =
+        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+if (excludedNodes != null) {
+  healthyNodes.removeAll(excludedNodes);
+}
+String msg;
+if (healthyNodes.size() == 0) {
+  msg = "No healthy node found to allocate container.";
 
 Review comment:
   "container" -> pipeline



[GitHub] [hadoop] ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default 
pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320554141
 
 

 ##
 File path: hadoop-ozone/pom.xml
 ##
 @@ -19,7 +19,7 @@
 3.2.0
 
   
-  hadoop-ozone
+  hadoop-_ozone
 
 Review comment:
   What's this for? 





[GitHub] [hadoop] ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default 
pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320553505
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
 ##
 @@ -199,4 +220,10 @@ void processNodeReport(DatanodeDetails datanodeDetails,
* @return the given datanode, or null if not found
*/
   DatanodeDetails getNodeByAddress(String address);
+
+  /**
+   * Get cluster map as in network topology for this node manager.
+   * @return cluster map
+   */
+  NetworkTopology getClusterMap();
 
 Review comment:
I suggest changing the name to getClusterNetworkTopologyMap.





[jira] [Commented] (HADOOP-16544) update io.netty in branch-2

2019-09-03 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921898#comment-16921898
 ] 

Duo Zhang commented on HADOOP-16544:


Is it possible to upgrade them to use netty 4? I think in other places we are 
already using netty 4?

> update io.netty in branch-2
> ---
>
> Key: HADOOP-16544
> URL: https://issues.apache.org/jira/browse/HADOOP-16544
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
>
> branch-2 pulls in io.netty 3.6.2.Final which is more than 5 years old.
> The latest is 3.10.6.Final. I know updating netty is sensitive but it deserves 
> some attention.






[GitHub] [hadoop] ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
ChenSammi commented on a change in pull request #1366: HDDS-1577. Add default 
pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320552958
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
 ##
 @@ -129,6 +138,18 @@
*/
   void removePipeline(Pipeline pipeline);
 
+  /**
+   * Get the entire Node2PipelineMap.
+   * @return Node2PipelineMap
+   */
+  Node2PipelineMap getNode2PipelineMap();
+
+  /**
+   * Set the Node2PipelineMap.
+   * @param node2PipelineMap Node2PipelineMap
+   */
+  void setNode2PipelineMap(Node2PipelineMap node2PipelineMap);
 
 Review comment:
I would suggest removing this function from the interface definition since it's 
only used in tests.
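
One hedged way to act on this, assuming Node2PipelineMap and Guava are on the
classpath: keep the setter on the implementation only, package-private and
marked for tests (the class body here is a sketch, not the real node manager):

import com.google.common.annotations.VisibleForTesting;
import org.apache.hadoop.hdds.scm.node.states.Node2PipelineMap;

public class NodeManagerImplSketch /* implements NodeManager */ {
  private Node2PipelineMap node2PipelineMap;

  // Not part of the public interface; tests in the same package can still
  // inject a prepared map.
  @VisibleForTesting
  void setNode2PipelineMap(Node2PipelineMap node2PipelineMap) {
    this.node2PipelineMap = node2PipelineMap;
  }
}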





[jira] [Commented] (HADOOP-16544) update io.netty in branch-2

2019-09-03 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921892#comment-16921892
 ] 

Wei-Chiu Chuang commented on HADOOP-16544:
--

good question ...

Looks like all classes in the io.netty 3.x artifact start with org.jboss.netty. 
It is mostly used in the NFS gateway and hadoop-mapreduce-client-shuffle.
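
The package split is what makes the remaining call sites easy to locate; a tiny
illustration, assuming both the netty 3 and netty 4 jars are on the classpath:

public class NettyPackageProbe {
  // netty 3.x ships under org.jboss.netty (artifact io.netty:netty), while
  // netty 4.x moved to io.netty, so grepping imports for org.jboss.netty
  // finds exactly the code still on netty 3.
  org.jboss.netty.channel.Channel legacyChannel; // netty 3 type
  io.netty.channel.Channel modernChannel;        // netty 4 type
}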

> update io.netty in branch-2
> ---
>
> Key: HADOOP-16544
> URL: https://issues.apache.org/jira/browse/HADOOP-16544
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
>
> branch-2 pulls in io.netty 3.6.2.Final which is more than 5 years old.
> The latest is 3.10.6.Final. I know updating netty is sensitive but it deserves 
> some attention.






[jira] [Commented] (HADOOP-16544) update io.netty in branch-2

2019-09-03 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921886#comment-16921886
 ] 

Duo Zhang commented on HADOOP-16544:


Just wondering, where do we still use netty 3? 

> update io.netty in branch-2
> ---
>
> Key: HADOOP-16544
> URL: https://issues.apache.org/jira/browse/HADOOP-16544
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
>
> branch-2 pulls in io.netty 3.6.2.Final which is more than 5 years old.
> The latest is 3.10.6.Final. I know updating netty is sensitive but it deserves 
> some attention.






[jira] [Updated] (HADOOP-16544) update io.netty in branch-2

2019-09-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16544:
-
Labels: release-blocker  (was: )

> update io.netty in branch-2
> ---
>
> Key: HADOOP-16544
> URL: https://issues.apache.org/jira/browse/HADOOP-16544
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>  Labels: release-blocker
>
> branch-2 pulls in io.netty 3.6.2.Final which is more than 5 years old.
> The latest is 3.10.6.Final. I know updating netty is sensitive but it deserves 
> some attention.






[jira] [Created] (HADOOP-16544) update io.netty in branch-2

2019-09-03 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-16544:


 Summary: update io.netty in branch-2
 Key: HADOOP-16544
 URL: https://issues.apache.org/jira/browse/HADOOP-16544
 Project: Hadoop Common
  Issue Type: Task
Reporter: Wei-Chiu Chuang


branch-2 pulls in io.netty 3.6.2.Final which is more than 5 years old.

The latest is 3.10.6.Final. I know updating netty is sensitive but it deserves 
some attention.






[jira] [Updated] (HADOOP-16544) update io.netty in branch-2

2019-09-03 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16544:
-
Priority: Major  (was: Critical)

> update io.netty in branch-2
> ---
>
> Key: HADOOP-16544
> URL: https://issues.apache.org/jira/browse/HADOOP-16544
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
>
> branch-2 pulls in io.netty 3.6.2.Final which is more than 5 years old.
> The latest is 3.10.6.Final. I know updating netty is sensitive but it deserves 
> some attention.






[GitHub] [hadoop] ChenSammi commented on issue #1393: HDDS-2069. Default values of properties hdds.datanode.storage.utilization.{critical | warning}.threshold are not reasonable

2019-09-03 Thread GitBox
ChenSammi commented on issue #1393: HDDS-2069. Default values of properties 
hdds.datanode.storage.utilization.{critical | warning}.threshold are not 
reasonable
URL: https://github.com/apache/hadoop/pull/1393#issuecomment-527712611
 
 
  Thanks @nandakumar131 for the review.





[GitHub] [hadoop] dineshchitlangia commented on issue #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-03 Thread GitBox
dineshchitlangia commented on issue #1386: HDDS-2015. Encrypt/decrypt key using 
symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#issuecomment-527712406
 
 
   @anuengineer, @bharatviswa504 - the failures are unrelated to the patch.
   I have filed HDDS-2081, HDDS-2082, HDDS-2083, HDDS-2084, HDDS-2085 for the 
test failures observed here.





[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320548487
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/Node2ObjectsMap.java
 ##
 @@ -83,7 +83,7 @@ public void insertNewDatanode(UUID datanodeID, Set<T> containerIDs)
*
* @param datanodeID - Datanode ID.
*/
-  void removeDatanode(UUID datanodeID) {
+  public void removeDatanode(UUID datanodeID) {
 
 Review comment:
   Sure. Updated.





[jira] [Commented] (HADOOP-15760) Include Apache Commons Collections4

2019-09-03 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921877#comment-16921877
 ] 

Wei-Chiu Chuang commented on HADOOP-15760:
--

The latest version is 4.4 
[https://mvnrepository.com/artifact/org.apache.commons/commons-collections4]

Not sure about the actual migration effort. I suppose it's a breaking change 
because commons-collections and commons-collections4 have different artifact 
IDs, but given that commons-collections 3.2.1 is more than 10 years old and 
3.2.2 is almost 4 years old, it's time to migrate.
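
At the source level the move is largely a package and artifact rename, since
the two libraries deliberately use different coordinates so they can coexist
during migration; a small sketch:

// commons-collections 3.x: commons-collections:commons-collections,
// package org.apache.commons.collections
// commons-collections4:    org.apache.commons:commons-collections4,
// package org.apache.commons.collections4
import java.util.Arrays;
import java.util.List;
import org.apache.commons.collections4.CollectionUtils;

public class Collections4Sketch {
  public static void main(String[] args) {
    List<Integer> a = Arrays.asList(1, 2, 3);
    List<Integer> b = Arrays.asList(2, 3, 4);
    // Same call shape as 3.x; only the import changes for common utilities.
    System.out.println(CollectionUtils.intersection(a, b)); // prints [2, 3]
  }
}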

> Include Apache Commons Collections4
> ---
>
> Key: HADOOP-15760
> URL: https://issues.apache.org/jira/browse/HADOOP-15760
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0, 3.0.3
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-15760.1.patch
>
>
> Please allow for use of Apache Commons Collections 4 library with the end 
> goal of migrating from Apache Commons Collections 3.






[GitHub] [hadoop] RogPodge opened a new pull request #1399: Fixed issue where old dns address would be cached and not updated

2019-09-03 Thread GitBox
RogPodge opened a new pull request #1399: Fixed issue where old dns address 
would be cached and not updated
URL: https://github.com/apache/hadoop/pull/1399
 
 
   Addresses the following issue:
   https://issues.apache.org/jira/browse/HADOOP-16543
   





[jira] [Commented] (HADOOP-16538) S3AFilesystem trash handling should respect the current UGI

2019-09-03 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921775#comment-16921775
 ] 

Mingliang Liu commented on HADOOP-16538:


+1 for the proposal

> S3AFilesystem trash handling should respect the current UGI
> ---
>
> Key: HADOOP-16538
> URL: https://issues.apache.org/jira/browse/HADOOP-16538
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Siddharth Seth
>Priority: Major
>
> S3 move to trash currently relies upon System.getProperty("user.name"). 
> Instead, it should rely on the current UGI to figure out the username.
> getHomeDirectory needs to be overridden to use UGI instead of 
> System.getProperty.
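
A minimal sketch of the proposed override, assuming the usual
UserGroupInformation API and a /user home prefix (the real S3AFileSystem
wiring may differ):

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiHomeDirSketch {
  // Derive the home (and thus trash) directory from the current UGI instead
  // of the process-wide user.name property, so doAs() proxy users get their
  // own trash location.
  public Path getHomeDirectory() {
    try {
      String user = UserGroupInformation.getCurrentUser().getShortUserName();
      return new Path("/user/" + user);
    } catch (IOException e) {
      // Fall back to the old behavior if the UGI cannot be resolved.
      return new Path("/user/" + System.getProperty("user.name"));
    }
  }
}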






[GitHub] [hadoop] HeartSaVioR commented on issue #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called.

2019-09-03 Thread GitBox
HeartSaVioR commented on issue #1388: HADOOP-16255. Add ChecksumFs.rename(path, 
path, boolean) to rename crc file as well when FileContext.rename(path, path, 
options) is called.
URL: https://github.com/apache/hadoop/pull/1388#issuecomment-527681513
 
 
Thanks for reviewing. I addressed the review comments, including rolling back to 
the copy-and-paste approach. Please take a look.
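
For readers without the PR open, the shape of the change is roughly the sketch
below; the helper names are assumptions standing in for the real
AbstractFileSystem/ChecksumFs API, not the merged code:

import java.io.IOException;
import org.apache.hadoop.fs.Path;

public abstract class ChecksumRenameSketch {
  abstract Path getChecksumFile(Path file);          // sidecar such as .name.crc
  abstract boolean exists(Path p) throws IOException;
  abstract void renameUnderlying(Path src, Path dst, boolean overwrite) throws IOException;
  abstract void deleteUnderlying(Path p) throws IOException;

  // Rename the data file, then keep its checksum sidecar in sync so a stale
  // .crc left at the destination cannot fail later reads.
  public void rename(Path src, Path dst, boolean overwrite) throws IOException {
    renameUnderlying(src, dst, overwrite);
    Path srcCrc = getChecksumFile(src);
    Path dstCrc = getChecksumFile(dst);
    if (exists(srcCrc)) {
      renameUnderlying(srcCrc, dstCrc, overwrite);
    } else if (exists(dstCrc)) {
      deleteUnderlying(dstCrc); // new data has no checksum; drop the stale one
    }
  }
}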





[jira] [Commented] (HADOOP-16439) Upgrade bundled Tomcat in branch-2

2019-09-03 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921769#comment-16921769
 ] 

Wei-Chiu Chuang commented on HADOOP-16439:
--

+1

> Upgrade bundled Tomcat in branch-2
> --
>
> Key: HADOOP-16439
> URL: https://issues.apache.org/jira/browse/HADOOP-16439
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: httpfs, kms
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16439-branch-2.000.patch, 
> HADOOP-16439-branch-2.001.patch
>
>
> proposed by  [~jojochuang] in mailing list:
> {quote}We migrated from Tomcat to Jetty in Hadoop3, because Tomcat 6 went EOL 
> in
>  2016. But we did not realize three years after Tomcat 6's EOL, a majority
>  of Hadoop users are still in Hadoop 2, and it looks like Hadoop 2 will stay
>  alive for another few years.
> Backporting Jetty to Hadoop2 is probably too big of an incompatibility.
>  How about migrating to Tomcat9?
> {quote}






[jira] [Created] (HADOOP-16543) Cached DNS name resolution error

2019-09-03 Thread Roger Liu (Jira)
Roger Liu created HADOOP-16543:
--

 Summary: Cached DNS name resolution error
 Key: HADOOP-16543
 URL: https://issues.apache.org/jira/browse/HADOOP-16543
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.1.2
Reporter: Roger Liu


In Kubernetes, a node may go down and then come back later with a different 
IP address. Yarn clients which are already running will be unable to rediscover 
the node after it comes back up due to caching the original IP address. This is 
problematic for cases such as Spark HA on Kubernetes, as the node containing 
the resource manager may go down and come back up, meaning existing node 
managers must then also be restarted.






[GitHub] [hadoop] HeartSaVioR commented on a change in pull request #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, optio

2019-09-03 Thread GitBox
HeartSaVioR commented on a change in pull request #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#discussion_r320518967
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestChecksumFs.java
 ##
 @@ -0,0 +1,130 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.util.EnumSet;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.local.LocalFs;
+import org.apache.hadoop.fs.permission.FsPermission;
+import static org.apache.hadoop.fs.CreateFlag.*;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Before;
+import org.junit.Test;
+import static org.junit.Assert.*;
 
 Review comment:
   Uh-oh. Actually I followed the import order from some existing code in the 
codebase. I guess it should be enforced - otherwise it falls into the "Broken 
Windows Theory". Maybe a style-guide file would also help the IDE learn what 
is preferred in this project.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16541) RM start fail due to zkCuratorManager connectionTimeoutMs,make it configurable

2019-09-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921767#comment-16921767
 ] 

Hadoop QA commented on HADOOP-16541:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 14m 
45s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 45s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 75 unchanged - 0 fixed = 77 total (was 75) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
37s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16541 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979178/HADOOP-16541_1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6399c3764c5e 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cfa41a4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16509/artifact/out/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16509/artifact/out/patch-compile-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16509/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test 

[GitHub] [hadoop] HeartSaVioR commented on a change in pull request #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, optio

2019-09-03 Thread GitBox
HeartSaVioR commented on a change in pull request #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#discussion_r320516721
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CheckedBiFunction.java
 ##
 @@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.util;
+
+import java.io.IOException;
+
+/**
+ * Defines a functional interface having two inputs which throws IOException.
+ */
+@FunctionalInterface
+public interface CheckedBiFunction 
{
 
 Review comment:
   Actually CheckedBiFunction is already available in another place (HDDS) - I 
feel these interfaces/classes would be better moved into the common module 
once Hadoop can forget about JDK 7.
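   
   For readers, a minimal sketch of the kind of interface under discussion 
(the generic parameters in the quoted diff were stripped by the mail archive; 
the type-parameter names below are assumptions, not taken from the patch):
   
   ```java
   import java.io.IOException;
   
   /**
    * Hedged sketch of a two-argument function that may throw IOException.
    * Type parameters (T, U, R) are illustrative only.
    */
   @FunctionalInterface
   interface CheckedBiFunction<T, U, R> {
     R apply(T t, U u) throws IOException;
   }
   ```
   
   Note that `@FunctionalInterface` and lambdas require Java 8, which is 
exactly the JDK 7 constraint on branch-2 that comes up later in this thread.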


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320505479
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -826,6 +828,28 @@ private VolumeList getVolumesByUser(String userNameKey)
 return count;
   }
 
+  @Override
+  public List getMultipartUploadKeys(
+  String volumeName, String bucketName, String prefix) throws IOException {
+List response = new ArrayList<>();
+TableIterator>
+iterator = getMultipartInfoTable().iterator();
+
+String prefixKey =
+OmMultipartUpload.getDbKey(volumeName, bucketName, prefix);
+iterator.seek(prefixKey);
+
+while (iterator.hasNext()) {
 
 Review comment:
   Here we also need to consider the table cache, now that the HA/non-HA code 
paths are merged (HDDS-1909).
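   
   For context, the quoted diff (whose generics the mail archive stripped) 
performs a prefix scan over a sorted key space: seek to the prefix key, then 
iterate while keys still start with the prefix. A plain-Java sketch of the 
same pattern using TreeMap, with illustrative names:
   
   ```java
   import java.util.Map;
   import java.util.SortedMap;
   import java.util.TreeMap;
   
   public class PrefixScanSketch {
     public static void main(String[] args) {
       TreeMap<String, String> table = new TreeMap<>();
       table.put("/vol1/bucket1/key1", "uploadA");
       table.put("/vol1/bucket1/key2", "uploadB");
       table.put("/vol1/bucket2/key1", "uploadC");
   
       String prefix = "/vol1/bucket1/";
       // tailMap(prefix) is the sorted-map analogue of iterator.seek(prefixKey).
       SortedMap<String, String> tail = table.tailMap(prefix);
       for (Map.Entry<String, String> e : tail.entrySet()) {
         if (!e.getKey().startsWith(prefix)) {
           break; // past the prefix range; stop scanning
         }
         System.out.println(e.getKey() + " -> " + e.getValue());
       }
     }
   }
   ```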


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1398: HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1398: HDDS-2064. 
OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
incorrectly
URL: https://github.com/apache/hadoop/pull/1398#discussion_r320503702
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -614,6 +615,12 @@ private void loadOMHAConfigs(Configuration conf) {
 " system with " + OZONE_OM_SERVICE_IDS_KEY + " and " +
 OZONE_OM_ADDRESS_KEY;
 throw new OzoneIllegalArgumentException(msg);
+  } else if (!isOMAddressSet && found == 0) {
 
 Review comment:
   Also try out test cases with multiple name services set. For this test 
case, you can use ozone.om.node.id if needed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1398: HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1398: HDDS-2064. 
OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
incorrectly
URL: https://github.com/apache/hadoop/pull/1398#discussion_r320503175
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -614,6 +615,12 @@ private void loadOMHAConfigs(Configuration conf) {
 " system with " + OZONE_OM_SERVICE_IDS_KEY + " and " +
 OZONE_OM_ADDRESS_KEY;
 throw new OzoneIllegalArgumentException(msg);
+  } else if (!isOMAddressSet && found == 0) {
 
 Review comment:
   We cannot add this condition here; consider the case
   OZONE_OM_SERVICE_IDS_KEY = ns1,ns2
   If, after iterating one nameserviceID, we are not able to find any OM node 
matching that nameservice, we should not throw an illegal-configuration error. 
We should try all name services and see whether the node matches any of them.
   
   Also, we can eliminate the for-loop iteration with 
OmUtils.emptyAsSingletonNull(omServiceIds), and similarly for nameServiceIds, 
since for HA this is a required configuration that must be set.
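   
   A runnable sketch of the point above, with all names illustrative rather 
than taken from the actual OzoneManager code:
   
   ```java
   import java.util.Arrays;
   import java.util.List;
   
   public class ServiceIdMatchSketch {
     // Return true if any configured service id matches; only after every
     // id has been tried is it safe to treat the configuration as illegal.
     static boolean matchesAny(List<String> serviceIds, String localNodeId) {
       for (String id : serviceIds) {
         if (localNodeId.startsWith(id)) { // stand-in for the real match logic
           return true;
         }
       }
       return false;
     }
   
     public static void main(String[] args) {
       List<String> ids = Arrays.asList("ns1", "ns2");
       // ns1 does not match, but ns2 does, so this must not fail:
       System.out.println(matchesAny(ids, "ns2-om1")); // prints true
     }
   }
   ```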


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1398: HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1398: HDDS-2064. 
OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
incorrectly
URL: https://github.com/apache/hadoop/pull/1398#discussion_r320503175
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -614,6 +615,12 @@ private void loadOMHAConfigs(Configuration conf) {
 " system with " + OZONE_OM_SERVICE_IDS_KEY + " and " +
 OZONE_OM_ADDRESS_KEY;
 throw new OzoneIllegalArgumentException(msg);
+  } else if (!isOMAddressSet && found == 0) {
 
 Review comment:
   We cannot add this condition here; consider the case
   OZONE_OM_SERVICE_IDS_KEY = ns1,ns2
   If, after iterating one nameserviceID, we are not able to find a match, we 
should not declare the configuration illegal. We should try all name services 
and see whether the node matches any of them.
   
   Also, we can eliminate the for-loop iteration with 
OmUtils.emptyAsSingletonNull(omServiceIds), and similarly for nameServiceIds, 
since for HA this is a required configuration that must be set.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] HeartSaVioR commented on issue #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called.

2019-09-03 Thread GitBox
HeartSaVioR commented on issue #1388: HADOOP-16255. Add ChecksumFs.rename(path, 
path, boolean) to rename crc file as well when FileContext.rename(path, path, 
options) is called.
URL: https://github.com/apache/hadoop/pull/1388#issuecomment-527657687
 
 
   Sigh... I forgot Hadoop 2.x runs on JDK 7. Actually I just did a 
copy-and-paste and expected a review comment about deduplication, hence took 
this approach, but forgot about that.
   
   Thanks for the reminder! I'll just roll back to the copy-and-paste solution.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] HeartSaVioR edited a comment on issue #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is calle

2019-09-03 Thread GitBox
HeartSaVioR edited a comment on issue #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#issuecomment-527657687
 
 
   Sigh... I forgot Hadoop 2.x runs on JDK 7+. Actually I just did a 
copy-and-paste and expected a review comment about deduplication, hence took 
this approach, but forgot about that.
   
   Thanks for the reminder! I'll just roll back to the copy-and-paste solution.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16439) Upgrade bundled Tomcat in branch-2

2019-09-03 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921748#comment-16921748
 ] 

Jonathan Hung commented on HADOOP-16439:


Thanks [~iwasakims]/[~jojochuang] for working on this. Is this ready to be 
committed?

> Upgrade bundled Tomcat in branch-2
> --
>
> Key: HADOOP-16439
> URL: https://issues.apache.org/jira/browse/HADOOP-16439
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: httpfs, kms
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16439-branch-2.000.patch, 
> HADOOP-16439-branch-2.001.patch
>
>
> proposed by  [~jojochuang] in mailing list:
> {quote}We migrated from Tomcat to Jetty in Hadoop3, because Tomcat 6 went EOL
> in 2016. But we did not realize that, three years after Tomcat 6's EOL, a
> majority of Hadoop users are still on Hadoop 2, and it looks like Hadoop 2
> will stay alive for another few years.
> Backporting Jetty to Hadoop2 is probably too big of an incompatibility.
> How about migrating to Tomcat9?
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1361: HDDS-1553. Add metrics in rack aware container placement policy.

2019-09-03 Thread GitBox
xiaoyuyao commented on a change in pull request #1361: HDDS-1553. Add metrics 
in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#discussion_r320487548
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMPlacementMetrics.java
 ##
 @@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Topology aware placement statistics.
+ */
+@Metrics(about="SCM Placement Metrics", context = "ozone")
+public class SCMPlacementMetrics implements MetricsSource {
+  public static final String SOURCE_NAME =
+  SCMPlacementMetrics.class.getSimpleName();
+  private static final MetricsInfo RECORD_INFO = Interns.info(SOURCE_NAME,
+  "SCM Placement Metrics");
+  private static MetricsRegistry registry;
+
+  // total datanode allocation request count
+  @Metric private MutableCounterLong datanodeRequestCount;
+  // datanode allocation tried count, including success, fallback and failed
+  @Metric private MutableCounterLong datanodeAllocationTryCount;
+  // datanode successful allocation count
+  @Metric private MutableCounterLong datanodeAllocationSuccessCount;
+  // datanode allocated with some allocation constraints compromised
+  @Metric private MutableCounterLong datanodeAllocationCompromiseCount;
+
+  public SCMPlacementMetrics() {
+  }
+
+  public static SCMPlacementMetrics create() {
 
 Review comment:
   can we add a helper to unregister the metrics?
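   
   For illustration, a hedged sketch of such a helper, written as extra 
methods of the quoted class (the method name and shape are assumptions; 
`MetricsSystem#unregisterSource` is the existing hook it would use):
   
   ```java
   // Inside SCMPlacementMetrics, alongside the quoted create() method.
   public static SCMPlacementMetrics create() {
     MetricsSystem ms = DefaultMetricsSystem.instance();
     registry = new MetricsRegistry(RECORD_INFO);
     return ms.register(SOURCE_NAME, "SCM Placement Metrics",
         new SCMPlacementMetrics());
   }
   
   // Hypothetical helper the review asks for: drop the source so tests can
   // re-register a fresh instance without a name collision.
   public static void unRegister() {
     DefaultMetricsSystem.instance().unregisterSource(SOURCE_NAME);
   }
   ```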


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1361: HDDS-1553. Add metrics in rack aware container placement policy.

2019-09-03 Thread GitBox
xiaoyuyao commented on a change in pull request #1361: HDDS-1553. Add metrics 
in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#discussion_r320487351
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMPlacementMetrics.java
 ##
 @@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Topology aware placement statistics.
+ */
+@Metrics(about="SCM Placement Metrics", context = "ozone")
+public class SCMPlacementMetrics implements MetricsSource {
+  public static final String SOURCE_NAME =
+  SCMPlacementMetrics.class.getSimpleName();
+  private static final MetricsInfo RECORD_INFO = Interns.info(SOURCE_NAME,
+  "SCM Placement Metrics");
+  private static MetricsRegistry registry;
+
+  // total datanode allocation request count
+  @Metric private MutableCounterLong datanodeRequestCount;
+  // datanode allocation tried count, including success, fallback and failed
+  @Metric private MutableCounterLong datanodeAllocationTryCount;
+  // datanode successful allocation count
+  @Metric private MutableCounterLong datanodeAllocationSuccessCount;
+  // datanode allocated with some allocation constraints compromised
+  @Metric private MutableCounterLong datanodeAllocationCompromiseCount;
 
 Review comment:
   datanodeAllocationCompromiseCount -> datanodeChooseFallbackCount


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1361: HDDS-1553. Add metrics in rack aware container placement policy.

2019-09-03 Thread GitBox
xiaoyuyao commented on a change in pull request #1361: HDDS-1553. Add metrics 
in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#discussion_r320487256
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMPlacementMetrics.java
 ##
 @@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Topology aware placement statistics.
+ */
+@Metrics(about="SCM Placement Metrics", context = "ozone")
+public class SCMPlacementMetrics implements MetricsSource {
+  public static final String SOURCE_NAME =
+  SCMPlacementMetrics.class.getSimpleName();
+  private static final MetricsInfo RECORD_INFO = Interns.info(SOURCE_NAME,
+  "SCM Placement Metrics");
+  private static MetricsRegistry registry;
+
+  // total datanode allocation request count
+  @Metric private MutableCounterLong datanodeRequestCount;
+  // datanode allocation tried count, including success, fallback and failed
+  @Metric private MutableCounterLong datanodeAllocationTryCount;
+  // datanode successful allocation count
+  @Metric private MutableCounterLong datanodeAllocationSuccessCount;
 
 Review comment:
   datanodeAllocationSuccessCount -> datanodeChooseSuccessCount


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1361: HDDS-1553. Add metrics in rack aware container placement policy.

2019-09-03 Thread GitBox
xiaoyuyao commented on a change in pull request #1361: HDDS-1553. Add metrics 
in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#discussion_r320486867
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMPlacementMetrics.java
 ##
 @@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Topology aware placement statistics.
+ */
+@Metrics(about="SCM Placement Metrics", context = "ozone")
+public class SCMPlacementMetrics implements MetricsSource {
+  public static final String SOURCE_NAME =
+  SCMPlacementMetrics.class.getSimpleName();
+  private static final MetricsInfo RECORD_INFO = Interns.info(SOURCE_NAME,
+  "SCM Placement Metrics");
+  private static MetricsRegistry registry;
+
+  // total datanode allocation request count
+  @Metric private MutableCounterLong datanodeRequestCount;
+  // datanode allocation tried count, including success, fallback and failed
+  @Metric private MutableCounterLong datanodeAllocationTryCount;
 
 Review comment:
   NIT: datanodeAllocationTryCount -> datanodeChooseAttemptCount


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1361: HDDS-1553. Add metrics in rack aware container placement policy.

2019-09-03 Thread GitBox
xiaoyuyao commented on a change in pull request #1361: HDDS-1553. Add metrics 
in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#discussion_r320486867
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMPlacementMetrics.java
 ##
 @@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.placement.algorithms;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
+import org.apache.hadoop.metrics2.lib.MetricsRegistry;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+
+/**
+ * This class is for maintaining Topology aware placement statistics.
+ */
+@Metrics(about="SCM Placement Metrics", context = "ozone")
+public class SCMPlacementMetrics implements MetricsSource {
+  public static final String SOURCE_NAME =
+  SCMPlacementMetrics.class.getSimpleName();
+  private static final MetricsInfo RECORD_INFO = Interns.info(SOURCE_NAME,
+  "SCM Placement Metrics");
+  private static MetricsRegistry registry;
+
+  // total datanode allocation request count
+  @Metric private MutableCounterLong datanodeRequestCount;
+  // datanode allocation tried count, including success, fallback and failed
+  @Metric private MutableCounterLong datanodeAllocationTryCount;
 
 Review comment:
   NIT: datanodeAllocationTryCount -> datanodeSelectAttemptCount


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #1371: HDDS-2018. Handle Set DtService of token for OM HA.

2019-09-03 Thread GitBox
bharatviswa504 merged pull request #1371: HDDS-2018. Handle Set DtService of 
token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1371: HDDS-2018. Handle Set DtService of token for OM HA.

2019-09-03 Thread GitBox
bharatviswa504 commented on issue #1371: HDDS-2018. Handle Set DtService of 
token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#issuecomment-527640423
 
 
   Thank you @xiaoyuyao for the review.
   I will commit this to trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl commented on issue #1398: HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-03 Thread GitBox
smengcl commented on issue #1398: HDDS-2064. 
OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
incorrectly
URL: https://github.com/apache/hadoop/pull/1398#issuecomment-527640041
 
 
   /label ozone


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #1371: HDDS-2018. Handle Set DtService of token for OM HA.

2019-09-03 Thread GitBox
xiaoyuyao commented on issue #1371: HDDS-2018. Handle Set DtService of token 
for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#issuecomment-527629438
 
 
   +1 pending CI.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dineshchitlangia commented on issue #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-03 Thread GitBox
dineshchitlangia commented on issue #1386: HDDS-2015. Encrypt/decrypt key using 
symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#issuecomment-527626815
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1225: HDDS-1909. Use new HA code for Non-HA in OM.

2019-09-03 Thread GitBox
bharatviswa504 commented on issue #1225: HDDS-1909. Use new HA code for Non-HA 
in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-527626117
 
 
   Opened jiras HDDS-2078 and HDDS-2079 for the secure cluster test failures.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #1225: HDDS-1909. Use new HA code for Non-HA in OM.

2019-09-03 Thread GitBox
bharatviswa504 merged pull request #1225: HDDS-1909. Use new HA code for Non-HA 
in OM.
URL: https://github.com/apache/hadoop/pull/1225
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1225: HDDS-1909. Use new HA code for Non-HA in OM.

2019-09-03 Thread GitBox
bharatviswa504 commented on issue #1225: HDDS-1909. Use new HA code for Non-HA 
in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-527624889
 
 
   Thank you @arp7 for the review.
   I have committed this to trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on issue #1225: HDDS-1909. Use new HA code for Non-HA in OM.

2019-09-03 Thread GitBox
arp7 commented on issue #1225: HDDS-1909. Use new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-527618227
 
 
   +1 to commit assuming the integration test failures are unrelated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on issue #1225: HDDS-1909. Use new HA code for Non-HA in OM.

2019-09-03 Thread GitBox
arp7 commented on issue #1225: HDDS-1909. Use new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-527617350
 
 
   Thanks @bharatviswa504. The patch looks pretty good to me.
   
   +1
   
   One remaining point was to self-terminate the OM in the RocksDB update 
failure path. It can be done in a separate jira.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1225: HDDS-1909. Use new HA code for Non-HA in OM.

2019-09-03 Thread GitBox
arp7 commented on a change in pull request #1225: HDDS-1909. Use new HA code 
for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#discussion_r320453906
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
 ##
 @@ -115,7 +119,36 @@ public OMResponse submitRequest(RpcController controller,
   }
 }
   } else {
-return submitRequestDirectlyToOM(request);
+OMClientResponse omClientResponse = null;
+long index = 0L;
+try {
+  OMClientRequest omClientRequest =
+  OzoneManagerRatisUtils.createClientRequest(request);
+  if (omClientRequest != null) {
+request = omClientRequest.preExecute(ozoneManager);
+index = transactionIndex.incrementAndGet();
+omClientRequest =
+OzoneManagerRatisUtils.createClientRequest(request);
+omClientResponse =
+omClientRequest.validateAndUpdateCache(ozoneManager, index,
+ozoneManagerDoubleBuffer::add);
+  } else {
+return submitRequestDirectlyToOM(request);
+  }
+} catch(IOException ex) {
+  // As some of the preExecute returns error. So handle here.
+  return createErrorResponse(request, ex);
+}
+
+try {
+  omClientResponse.getFlushFuture().get();
+  LOG.trace("Future for {} is completed", request);
+} catch (ExecutionException | InterruptedException ex) {
+  // Do we need to terminate OM here?
 
 Review comment:
   Yes I would err on the side of safety and self-terminate the OM here.
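   
   For concreteness, a hedged sketch of what self-termination could look like 
at this point, using the common Hadoop `ExitUtil` helper (whether the OM would 
use `ExitUtil` or an Ozone-specific exit manager here is an assumption):
   
   ```java
   try {
     omClientResponse.getFlushFuture().get();
   } catch (ExecutionException | InterruptedException ex) {
     // Fail fast rather than risk serving requests from an inconsistent
     // state after a failed double-buffer flush.
     org.apache.hadoop.util.ExitUtil.terminate(1,
         "OM double-buffer flush failed: " + ex.getMessage());
   }
   ```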


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl opened a new pull request #1398: [WIP] HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-03 Thread GitBox
smengcl opened a new pull request #1398: [WIP] HDDS-2064. 
OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
incorrectly
URL: https://github.com/apache/hadoop/pull/1398
 
 
   The WIP patch prevents the NPE in the `TestOzoneManagerConfiguration` unit 
tests `testWrongConfigurationNoOMNodes` and `testWrongConfigurationNoOMAddrs`, 
but it fails `testWrongConfiguration` and `testMultipleOMServiceIds`.
   
   I might need to place the fix logic somewhere other than `OzoneManager` to 
pass the other unit tests without modifying them. Please advise.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-513813503
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1058 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 39 | trunk passed |
   | +1 | shadedclient | 695 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 29 | trunk passed |
   | 0 | spotbugs | 63 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 62 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 9 new 
+ 5 unchanged - 1 fixed = 14 total (was 6) |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 763 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | the patch passed |
   | +1 | findbugs | 64 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 286 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3338 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux fbd29dcac157 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / acdb0a1 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/6/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/6/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-515439898
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1084 | trunk passed |
   | +1 | compile | 39 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 45 | trunk passed |
   | +1 | shadedclient | 721 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | -1 | findbugs | 57 | hadoop-tools/hadoop-aws in trunk has 1 extant 
findbugs warnings. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | -0 | checkstyle | 19 | hadoop-tools/hadoop-aws: The patch generated 9 new 
+ 5 unchanged - 1 fixed = 14 total (was 6) |
   | +1 | mvnsite | 35 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 760 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | the patch passed |
   | +1 | findbugs | 64 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 283 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3381 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 88c60c3d4efe 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / aebac6d |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/7/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/7/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/7/testReport/ |
   | Max. process+thread count | 440 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-522079805
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1150 | trunk passed |
   | +1 | compile | 30 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 33 | trunk passed |
   | +1 | shadedclient | 652 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | trunk passed |
   | 0 | spotbugs | 57 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 56 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | -0 | checkstyle | 17 | hadoop-tools/hadoop-aws: The patch generated 9 new 
+ 5 unchanged - 1 fixed = 14 total (was 6) |
   | +1 | mvnsite | 28 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 690 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | the patch passed |
   | +1 | findbugs | 57 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 69 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3040 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux ff6197320ffa 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e356e4f |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/11/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/11/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/11/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-519475520
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 87 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1199 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 37 | trunk passed |
   | +1 | shadedclient | 800 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   | 0 | spotbugs | 60 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 9 new 
+ 5 unchanged - 1 fixed = 14 total (was 6) |
   | +1 | mvnsite | 30 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 830 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | the patch passed |
   | +1 | findbugs | 66 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 288 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3694 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux e8bdb209ab83 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 397a563 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/10/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/10/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-517509424
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 65 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1228 | trunk passed |
   | +1 | compile | 34 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 40 | trunk passed |
   | +1 | shadedclient | 728 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   | 0 | spotbugs | 66 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | -1 | findbugs | 64 | hadoop-tools/hadoop-aws in trunk has 1 extant 
findbugs warnings. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 38 | the patch passed |
   | +1 | javac | 38 | the patch passed |
   | -0 | checkstyle | 17 | hadoop-tools/hadoop-aws: The patch generated 9 new 
+ 5 unchanged - 1 fixed = 14 total (was 6) |
   | +1 | mvnsite | 36 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 770 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | the patch passed |
   | +1 | findbugs | 74 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 297 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 3611 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux d082a0689d0b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d086d05 |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/8/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/8/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/8/testReport/ |
   | Max. process+thread count | 402 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-523912212
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1197 | trunk passed |
   | +1 | compile | 40 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 43 | trunk passed |
   | +1 | shadedclient | 769 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   | 0 | spotbugs | 64 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 62 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 40 | the patch passed |
   | +1 | compile | 33 | the patch passed |
   | +1 | javac | 33 | the patch passed |
   | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 9 new 
+ 5 unchanged - 1 fixed = 14 total (was 6) |
   | +1 | mvnsite | 36 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 771 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | the patch passed |
   | +1 | findbugs | 78 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 75 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3385 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux cda426a653be 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 69ddb36 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/13/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/13/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/13/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-519346572
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 97 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1407 | trunk passed |
   | +1 | compile | 40 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 43 | trunk passed |
   | +1 | shadedclient | 894 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   | 0 | spotbugs | 73 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 71 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 38 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 9 new 
+ 5 unchanged - 1 fixed = 14 total (was 6) |
   | +1 | mvnsite | 35 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 845 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | the patch passed |
   | +1 | findbugs | 72 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 279 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 4053 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 81816a5a71da 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 70b4617 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/9/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/9/testReport/ |
   | Max. process+thread count | 409 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-525241947
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1109 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 38 | trunk passed |
   | +1 | shadedclient | 747 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 30 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 31 | the patch passed |
   | +1 | javac | 31 | the patch passed |
   | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 9 new 
+ 5 unchanged - 1 fixed = 14 total (was 6) |
   | +1 | mvnsite | 34 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 768 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | the patch passed |
   | +1 | findbugs | 62 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 80 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3254 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 8399edea302d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3329257 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/14/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/14/testReport/ |
   | Max. process+thread count | 418 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/14/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-523043584
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1211 | trunk passed |
   | +1 | compile | 38 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 41 | trunk passed |
   | +1 | shadedclient | 805 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | trunk passed |
   | 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 55 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -0 | checkstyle | 22 | hadoop-tools/hadoop-aws: The patch generated 9 new 
+ 5 unchanged - 1 fixed = 14 total (was 6) |
   | +1 | mvnsite | 38 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 890 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | the patch passed |
   | +1 | findbugs | 71 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 88 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3572 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 846fe8ce6883 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 094d736 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/12/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/12/testReport/ |
   | Max. process+thread count | 331 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/12/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #609: HADOOP-16193. add extra S3A MPU test to see what happens if a file is created during the MPU

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #609: HADOOP-16193. add extra S3A MPU 
test to see what happens if a file is created during the MPU
URL: https://github.com/apache/hadoop/pull/609#issuecomment-527293177
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1102 | trunk passed |
   | +1 | compile | 34 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 36 | trunk passed |
   | +1 | shadedclient | 728 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | trunk passed |
   | 0 | spotbugs | 62 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 60 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 32 | the patch passed |
   | +1 | javac | 32 | the patch passed |
   | +1 | checkstyle | 21 | the patch passed |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | the patch passed |
   | +1 | findbugs | 66 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 84 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3210 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-609/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/609 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 43ade3e885eb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-609/10/testReport/ |
   | Max. process+thread count | 447 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-609/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] steveloughran commented on issue #609: HADOOP-16193. add extra S3A MPU test to see what happens if a file is created during the MPU

2019-09-03 Thread GitBox
steveloughran commented on issue #609: HADOOP-16193. add extra S3A MPU test to 
see what happens if a file is created during the MPU
URL: https://github.com/apache/hadoop/pull/609#issuecomment-527600058
 
 
   ok, if this doesn't work as is, what should our next step be?





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #609: HADOOP-16193. add extra S3A MPU test to see what happens if a file is created during the MPU

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #609: HADOOP-16193. add extra S3A MPU 
test to see what happens if a file is created during the MPU
URL: https://github.com/apache/hadoop/pull/609#issuecomment-523934592
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 13 | https://github.com/apache/hadoop/pull/609 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/609 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-609/8/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #609: HADOOP-16193. add extra S3A MPU test to see what happens if a file is created during the MPU

2019-09-03 Thread GitBox
hadoop-yetus removed a comment on issue #609: HADOOP-16193. add extra S3A MPU 
test to see what happens if a file is created during the MPU
URL: https://github.com/apache/hadoop/pull/609#issuecomment-525274393
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 49 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1273 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 42 | trunk passed |
   | +1 | shadedclient | 875 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   | 0 | spotbugs | 71 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 69 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | +1 | checkstyle | 21 | the patch passed |
   | +1 | mvnsite | 34 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 901 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | the patch passed |
   | +1 | findbugs | 70 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 86 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 3722 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-609/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/609 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f9dc8f84311c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3329257 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-609/9/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-609/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16187) ITestS3GuardToolDynamoDB test failures

2019-09-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921669#comment-16921669
 ] 

Steve Loughran commented on HADOOP-16187:
-

aah, this happens in branch-3.2 too

> ITestS3GuardToolDynamoDB test failures
> --
>
> Key: HADOOP-16187
> URL: https://issues.apache.org/jira/browse/HADOOP-16187
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> two tests failing in ITestS3GuardToolDynamoDB
> * ITestS3GuardToolDynamoDB.testDynamoDBInitDestroyCycle
> * ITestS3GuardToolDynamoDB.testBucketInfoUnguarded






[GitHub] [hadoop] bharatviswa504 commented on issue #1371: HDDS-2018. Handle Set DtService of token for OM HA.

2019-09-03 Thread GitBox
bharatviswa504 commented on issue #1371: HDDS-2018. Handle Set DtService of 
token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#issuecomment-527595115
 
 
   Thank you @xiaoyuyao for the review.
   Addressed the review comments.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320425899
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1270,6 +1271,58 @@ public void abortMultipartUpload(OmKeyArgs omKeyArgs) 
throws IOException {
 
   }
 
+  @Override
+  public OmMultipartUploadList listMultipartUploads(String volumeName,
+  String bucketName, String prefix) throws OMException {
+Preconditions.checkNotNull(volumeName);
+Preconditions.checkNotNull(bucketName);
+
+metadataManager.getLock().acquireLock(BUCKET_LOCK, volumeName, bucketName);
+try {
+
+  List<String> multipartUploadKeys =
+  metadataManager
+  .getMultipartUploadKeys(volumeName, bucketName, prefix);
+
+  List<OmMultipartUpload> collect = multipartUploadKeys.stream()
+  .map(OmMultipartUpload::from)
+  .map(upload -> {
+String dbKey = metadataManager
+.getOzoneKey(upload.getVolumeName(),
+upload.getBucketName(),
+upload.getKeyName());
+try {
+  Table<String, OmKeyInfo> openKeyTable =
+  metadataManager.getOpenKeyTable();
+
+  OmKeyInfo omKeyInfo =
+  openKeyTable.get(upload.getDbKey());
+  upload.setCreationTime(
+  Instant.ofEpochMilli(omKeyInfo.getCreationTime()));
+} catch (IOException e) {
+  LOG.warn(
+  "Open key entry for multipart upload record can be read  {}",
+  dbKey);
+}
+return upload;
+  })
+  .collect(Collectors.toList());
+
+  OmMultipartUploadList omMultipartUploadList =
+  new OmMultipartUploadList(collect);
+
+  return omMultipartUploadList;
+
+} catch (IOException ex) {
+  LOG.error("List Multipart Uploads Failed: volume: " + volumeName +
 
 Review comment:
   LOG.error("List Multipart Uploads Failed: volume: " + volumeName +
 "bucket: " + bucketName + "prefix: " + prefix, ex);
   to
   LOG.error("List Multipart Uploads Failed: volume: {} bucket: {} prefix: 
{}",volumeName, bucketName, prefix, ex);
   





[GitHub] [hadoop] bharatviswa504 edited a comment on issue #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-03 Thread GitBox
bharatviswa504 edited a comment on issue #1277: HDDS-1054. List Multipart 
uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-527587093
 
 
   When I run the command I see the output below; it is not showing the 
bucketName, keyMarker, and other fields.
   ```
   bash-4.2$ aws s3api --endpoint http://s3g:9878 list-multipart-uploads 
--bucket b1234 --prefix mpu
   {
   "Uploads": [
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:35.916Z", 
   "UploadId": 
"24eea7f4-52db-4a0f-978a-f06cb7a57657-102730037717565440", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }, 
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:37.816Z", 
   "UploadId": 
"81c0e5c2-db11-4b11-a5f7-81f48bdbfb04-102730037842083841", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey1", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }, 
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:39.259Z", 
   "UploadId": 
"4aab75b8-1954-4e8a-a658-0d403bcbc42f-102730037936717826", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey2", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }
   ]
   }
   ```





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320422434
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadList.java
 ##
 @@ -6,9 +6,9 @@
  * to you under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
- * 
- * http://www.apache.org/licenses/LICENSE-2.0
- * 
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
 
 Review comment:
   Unintended change?





[GitHub] [hadoop] bharatviswa504 commented on issue #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-03 Thread GitBox
bharatviswa504 commented on issue #1277: HDDS-1054. List Multipart uploads in a 
bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-527587093
 
 
   When I run the command I see the output below; it is not showing the 
bucketName, keyMarker, and other fields.
   bash-4.2$ aws s3api --endpoint http://s3g:9878 list-multipart-uploads 
--bucket b1234 --prefix mpu
   {
   "Uploads": [
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:35.916Z", 
   "UploadId": 
"24eea7f4-52db-4a0f-978a-f06cb7a57657-102730037717565440", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }, 
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:37.816Z", 
   "UploadId": 
"81c0e5c2-db11-4b11-a5f7-81f48bdbfb04-102730037842083841", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey1", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }, 
   {
   "Initiator": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }, 
   "Initiated": "2019-09-03T18:39:39.259Z", 
   "UploadId": 
"4aab75b8-1954-4e8a-a658-0d403bcbc42f-102730037936717826", 
   "StorageClass": "STANDARD", 
   "Key": "mpukey2", 
   "Owner": {
   "DisplayName": "Not Supported", 
   "ID": "NOT-SUPPORTED"
   }
   }
   ]
   }





[GitHub] [hadoop] xiaoyuyao commented on issue #1373: HDDS-2053. Fix TestOzoneManagerRatisServer failure. Contributed by Xi…

2019-09-03 Thread GitBox
xiaoyuyao commented on issue #1373: HDDS-2053. Fix TestOzoneManagerRatisServer 
failure. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/1373#issuecomment-527580152
 
 
   Just try repeating the test run more than once in IntelliJ; you will be 
able to repro the metrics leak.
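   
   To make the repro mechanism concrete, a hedged sketch of the kind of leak
   repeated runs expose, assuming a metrics source registered during setup is
   never unregistered in teardown; the source and registration names below
   are hypothetical, not the actual TestOzoneManagerRatisServer code:
   
   ```java
   import org.apache.hadoop.metrics2.MetricsSystem;
   import org.apache.hadoop.metrics2.annotation.Metric;
   import org.apache.hadoop.metrics2.annotation.Metrics;
   import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
   import org.apache.hadoop.metrics2.lib.MutableCounterLong;
   
   public class MetricsLeakSketch {
   
     // Hypothetical metrics source, for demonstration only.
     @Metrics(about = "Example source", context = "test")
     static class ExampleSource {
       @Metric MutableCounterLong exampleOps;
     }
   
     public static void main(String[] args) {
       MetricsSystem ms = DefaultMetricsSystem.instance();
       // First "test run": registration succeeds.
       ms.register("ExampleSource", "Example source", new ExampleSource());
       // A second run in the same JVM would fail ("Metrics source
       // ExampleSource already exists!") unless the first run cleans up:
       ms.unregisterSource("ExampleSource");
       // With the unregister in a teardown method, re-registering works.
       ms.register("ExampleSource", "Example source", new ExampleSource());
     }
   }
   ```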





[GitHub] [hadoop] swagle commented on issue #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
swagle commented on issue #1366: HDDS-1577. Add default pipeline placement 
policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#issuecomment-527574643
 
 
   +1 overall; pointed out a minor nit.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320407326
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1270,6 +1271,58 @@ public void abortMultipartUpload(OmKeyArgs omKeyArgs) 
throws IOException {
 
   }
 
+  @Override
+  public OmMultipartUploadList listMultipartUploads(String volumeName,
+  String bucketName, String prefix) throws OMException {
+Preconditions.checkNotNull(volumeName);
+Preconditions.checkNotNull(bucketName);
 
 Review comment:
    prefix should also not be null, since prefix is required in 
ListMultipartUploadRequest in the proto.
    
    Also, here we are using "+" for concatenation, so if we pass null for 
prefix, the key becomes /volume/bucket/null. The method below is called by 
getMultipartUploadKeys.
 public static String getDbKey(String volume, String bucket, String key) {
   return OM_KEY_PREFIX + volume + OM_KEY_PREFIX + bucket +
   OM_KEY_PREFIX + key;
 }
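    
    A small standalone illustration of that point, assuming OM_KEY_PREFIX is
    "/" as in OzoneConsts; Java string concatenation renders a null reference
    as the literal text "null":
    
    ```java
    public class DbKeyNullSketch {
      private static final String OM_KEY_PREFIX = "/";
    
      public static String getDbKey(String volume, String bucket, String key) {
        return OM_KEY_PREFIX + volume + OM_KEY_PREFIX + bucket
            + OM_KEY_PREFIX + key;
      }
    
      public static void main(String[] args) {
        // A null prefix silently becomes part of the lookup key instead of
        // failing fast, which is why the null check on prefix matters.
        System.out.println(getDbKey("vol1", "bucket1", null)); // /vol1/bucket1/null
      }
    }
    ```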





[GitHub] [hadoop] swagle commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
swagle commented on a change in pull request #1366: HDDS-1577. Add default 
pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320408049
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/Node2ObjectsMap.java
 ##
 @@ -83,7 +83,7 @@ public void insertNewDatanode(UUID datanodeID, Set<T> 
containerIDs)
*
* @param datanodeID - Datanode ID.
*/
-  void removeDatanode(UUID datanodeID) {
+  public void removeDatanode(UUID datanodeID) {
 
 Review comment:
   Should annotate these methods as @VisibleForTesting
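    
    For reference, a sketch of the suggested annotation (Guava's
    @VisibleForTesting, already used elsewhere in Hadoop); the surrounding
    class is illustrative, not the full Node2ObjectsMap:
    
    ```java
    import java.util.UUID;
    
    import com.google.common.annotations.VisibleForTesting;
    
    public class Node2ObjectsMapSketch {
    
      // Widened from package-private to public only so tests can call it;
      // the annotation records that intent for reviewers and tooling.
      @VisibleForTesting
      public void removeDatanode(UUID datanodeID) {
        // ... remove the datanode's entries from the map ...
      }
    }
    ```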





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320407326
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1270,6 +1271,58 @@ public void abortMultipartUpload(OmKeyArgs omKeyArgs) 
throws IOException {
 
   }
 
+  @Override
+  public OmMultipartUploadList listMultipartUploads(String volumeName,
+  String bucketName, String prefix) throws OMException {
+Preconditions.checkNotNull(volumeName);
+Preconditions.checkNotNull(bucketName);
 
 Review comment:
   prefix also should not be null. As prefix is also required in 
ListMultipartUploadRequest in proto.





[GitHub] [hadoop] steveloughran commented on issue #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called.

2019-09-03 Thread GitBox
steveloughran commented on issue #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#issuecomment-527573101
 
 
   core patch LGTM for a branch 3+ patch; made some minor comments about tests 
and imports
   
   If you want this in branch-2 then I think a copy-and-paste solution is the 
simpler one





[GitHub] [hadoop] steveloughran commented on a change in pull request #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, opt

2019-09-03 Thread GitBox
steveloughran commented on a change in pull request #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#discussion_r320405606
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestChecksumFs.java
 ##
 @@ -0,0 +1,130 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.util.EnumSet;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.local.LocalFs;
+import org.apache.hadoop.fs.permission.FsPermission;
+import static org.apache.hadoop.fs.CreateFlag.*;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Before;
+import org.junit.Test;
+import static org.junit.Assert.*;
 
 Review comment:
   order should be  
   
   ```
   java
   
   all-not org.apache, non-static
   
   org.apache.
   
   all static imports
   ```
   
    and in alphabetical order within blocks. It's not enforced the way 
scalacheck does for Spark, but we try to have a rule, even if it gets 
neglected. That static import block is critical.
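    
    Applied to the imports quoted above, the suggested layout would look
    roughly like this (a sketch of the convention, not a checkstyle-enforced
    rule; the wrapping class is only there to make the unit compile):
    
    ```java
    // java.* first
    import java.io.IOException;
    import java.util.EnumSet;
    
    // then everything that is not org.apache, non-static
    import org.junit.Before;
    import org.junit.Test;
    
    // then org.apache imports
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.local.LocalFs;
    import org.apache.hadoop.fs.permission.FsPermission;
    import org.apache.hadoop.test.GenericTestUtils;
    
    // all static imports last
    import static org.apache.hadoop.fs.CreateFlag.*;
    import static org.junit.Assert.*;
    
    public class ImportOrderSketch {
      // body omitted; only the import ordering is the point here
    }
    ```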





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320405582
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUpload.java
 ##
 @@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import java.time.Instant;
+
+import static org.apache.hadoop.ozone.OzoneConsts.OM_KEY_PREFIX;
+
+/**
+ * Information about one initialized upload.
+ */
+public class OmMultipartUpload {
+
+  private String volumeName;
+
+  private String bucketName;
+
+  private String keyName;
+
+  private String uploadId;
+
+  private Instant creationTime;
+
+  public OmMultipartUpload(String volumeName, String bucketName,
+      String keyName, String uploadId) {
+    this.volumeName = volumeName;
+    this.bucketName = bucketName;
+    this.keyName = keyName;
+    this.uploadId = uploadId;
+  }
+
+  public OmMultipartUpload(String volumeName, String bucketName,
+      String keyName, String uploadId, Instant creationDate) {
+    this.volumeName = volumeName;
+    this.bucketName = bucketName;
+    this.keyName = keyName;
+    this.uploadId = uploadId;
+    this.creationTime = creationDate;
+  }
+
+  public static OmMultipartUpload from(String key) {
+    String[] split = key.split(OM_KEY_PREFIX);
+    if (split.length < 5) {
+      throw new IllegalArgumentException("Key " + key
+          + " doesn't have enough segments to be a valid multpart upload key");
 
 Review comment:
   multpart -> multipart


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320405424
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
 ##
 @@ -555,6 +555,16 @@ public OzoneOutputStream createFile(String keyName, long size,
         .listStatus(volumeName, name, keyName, recursive, startKey, numEntries);
   }
 
+  /**
+   * Return with the list of the in-flight multipart uploads.
+   *
+   * @param prefix Optional string to filter for the selected keys.
+   */
+  public OzoneMultipartUploadList listMultpartUploads(String prefix)
 
 Review comment:
   listMultpartUploads -> listMultipartUploads


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
xiaoyuyao commented on a change in pull request #1366: HDDS-1577. Add default 
pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320404457
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -0,0 +1,237 @@
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that chooses datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * 
+ * 1. get a list of healthy nodes
+ * 2. filter out viable nodes that either don't have enough size left
+ *    or are too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ *    described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
 
 Review comment:
   Sounds good to me. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, opt

2019-09-03 Thread GitBox
steveloughran commented on a change in pull request #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#discussion_r320403035
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
 ##
 @@ -457,17 +458,35 @@ public boolean setReplication(Path src, short replication)
   @Override
   public void renameInternal(Path src, Path dst) 
     throws IOException, UnresolvedLinkException {
+    renameInternal(src, dst, (s, d) -> getMyFs().rename(s, d));
+  }
+
+  @Override
+  public void renameInternal(Path src, Path dst, boolean overwrite)
+      throws AccessControlException, FileAlreadyExistsException,
+      FileNotFoundException, ParentNotDirectoryException,
+      UnresolvedLinkException, IOException {
+    Options.Rename renameOpt = Options.Rename.NONE;
+    if (overwrite) {
+      renameOpt = Options.Rename.OVERWRITE;
+    }
+    final Options.Rename opt = renameOpt;
+    renameInternal(src, dst, (s, d) -> getMyFs().rename(s, d, opt));
 
 Review comment:
   Looking at this, it's the more elegant functional API, which is nice for 
Hadoop 3+. But I fear it's probably going to lose all that elegance on 
branch-2 (assuming you do want a backport). If you do, then simply copying the 
existing renameInternal to one with a new signature is going to be the 
simplest; a sketch follows below.
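
   As a rough sketch of that fallback, the lambda above would become an 
anonymous class on branch-2 (hedged: the apply() method name is an assumption 
about this PR's CheckedBiFunction, not confirmed here):
   
   ```java
   // Branch-2 friendly variant of the overwrite rename: same behaviour,
   // no Java 8 lambda. The apply() signature is assumed for illustration.
   renameInternal(src, dst, new CheckedBiFunction<Path, Path, IOException>() {
     @Override
     public void apply(Path s, Path d) throws IOException {
       getMyFs().rename(s, d, opt);
     }
   });
   ```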


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-03 Thread GitBox
bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r320401502
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1270,6 +1271,58 @@ public void abortMultipartUpload(OmKeyArgs omKeyArgs) throws IOException {
 
   }
 
+  @Override
+  public OmMultipartUploadList listMultipartUploads(String volumeName,
+      String bucketName, String prefix) throws OMException {
+    Preconditions.checkNotNull(volumeName);
+    Preconditions.checkNotNull(bucketName);
+
+    metadataManager.getLock().acquireLock(BUCKET_LOCK, volumeName, bucketName);
+    try {
+
+      List<String> multipartUploadKeys =
+          metadataManager
+              .getMultipartUploadKeys(volumeName, bucketName, prefix);
+
+      List<OmMultipartUpload> collect = multipartUploadKeys.stream()
+          .map(OmMultipartUpload::from)
+          .map(upload -> {
+            String dbKey = metadataManager
+                .getOzoneKey(upload.getVolumeName(),
+                    upload.getBucketName(),
+                    upload.getKeyName());
+            try {
+              Table<String, OmKeyInfo> openKeyTable =
+                  metadataManager.getOpenKeyTable();
+
+              OmKeyInfo omKeyInfo =
+                  openKeyTable.get(upload.getDbKey());
 
 Review comment:
   Here we are reading openKeyTable only to get the creation time. If we had 
this information in OmMultipartKeyInfo, we could avoid the DB calls to 
openKeyTable.
   
   To do this, we can set creationTime in OmMultipartKeyInfo during 
initiateMultipartUpload. That way we can get all the required information 
from the MultipartKeyInfo table; a rough sketch follows below.
   
   Also, StorageClass is missing from the returned OmMultipartUpload, and 
listMultipartUploads shows StorageClass information. For this, we can return 
the replicationType and, depending on its value, set StorageClass in the 
listMultipartUploads response.
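
   A minimal sketch of the creationTime idea (hypothetical: the 
setCreationTime/getCreationTime accessors and the getMultipartInfoTable() 
lookup used here are assumptions for illustration, not code from this PR):
   
   ```java
   // During initiateMultipartUpload: record the start time in the multipart
   // key info (setCreationTime is a hypothetical accessor).
   multipartKeyInfo.setCreationTime(Time.now());
   
   // In listMultipartUploads: read it back from the multipart info table,
   // so the per-upload openKeyTable.get() lookup above is no longer needed.
   OmMultipartKeyInfo info =
       metadataManager.getMultipartInfoTable().get(upload.getDbKey());
   upload.setCreationTime(Instant.ofEpochMilli(info.getCreationTime()));
   ```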


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1371: HDDS-2018. Handle Set DtService of token for OM HA.

2019-09-03 Thread GitBox
xiaoyuyao commented on a change in pull request #1371: HDDS-2018. Handle Set 
DtService of token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#discussion_r320400072
 
 

 ##
 File path: 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestOzoneDelegationTokenSelector.java
 ##
 @@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.security;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.token.Token;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.nio.charset.StandardCharsets;
+import java.util.Collections;
+
+import static org.apache.hadoop.ozone.security.OzoneTokenIdentifier.KIND_NAME;
+
+/**
+ * Class to test OzoneDelegationTokenSelector.
+ */
+public class TestOzoneDelegationTokenSelector {
+
+
+  @Test
+  public void testTokenSelector() {
+
+    // set dummy details for identifier and password in token.
+    byte[] identifier =
+        RandomStringUtils.randomAlphabetic(10)
+            .getBytes(StandardCharsets.UTF_8);
+    byte[] password =
+        RandomStringUtils.randomAlphabetic(10)
+            .getBytes(StandardCharsets.UTF_8);
+
+    Token<OzoneTokenIdentifier> tokenIdentifierToken =
+        new Token<>(identifier, password, KIND_NAME, getService());
+
+    OzoneDelegationTokenSelector ozoneDelegationTokenSelector =
+        new OzoneDelegationTokenSelector();
+
+    Text service = new Text("om1:9862");
+
+    Token<OzoneTokenIdentifier> selectedToken =
+        ozoneDelegationTokenSelector.selectToken(service,
+            Collections.singletonList(tokenIdentifierToken));
+
+
+    Assert.assertNotNull(selectedToken);
+
+
+    tokenIdentifierToken.setService(new Text("om1:9863"));
+    selectedToken =
+        ozoneDelegationTokenSelector.selectToken(service,
+            Collections.singletonList(tokenIdentifierToken));
+
+    Assert.assertNull(selectedToken);
+
+    service = new Text("om1:9863");
+    selectedToken =
+        ozoneDelegationTokenSelector.selectToken(service,
+            Collections.singletonList(tokenIdentifierToken));
 
 Review comment:
   Can we define a variable to avoid creating this list three times?
Collections.singletonList(tokenIdentifierToken)
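
   For example (a sketch based on the test code above; assumes a java.util.List 
import):
   
   ```java
   // Build the singleton list once and reuse it for each selectToken() call.
   List<Token<OzoneTokenIdentifier>> tokens =
       Collections.singletonList(tokenIdentifierToken);
   
   Token<OzoneTokenIdentifier> selectedToken =
       ozoneDelegationTokenSelector.selectToken(service, tokens);
   ```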


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao closed pull request #1253: HDFS-8631. WebHDFS : Support setQuota

2019-09-03 Thread GitBox
sunchao closed pull request #1253: HDFS-8631. WebHDFS : Support setQuota
URL: https://github.com/apache/hadoop/pull/1253
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, opt

2019-09-03 Thread GitBox
steveloughran commented on a change in pull request #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#discussion_r320399574
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestChecksumFs.java
 ##
 @@ -0,0 +1,130 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.util.EnumSet;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.local.LocalFs;
+import org.apache.hadoop.fs.permission.FsPermission;
+import static org.apache.hadoop.fs.CreateFlag.*;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Before;
+import org.junit.Test;
+import static org.junit.Assert.*;
+
+/**
+ * This class tests the functionality of ChecksumFs.
+ */
+public class TestChecksumFs {
+  private Configuration conf;
+  private Path testRootDirPath;
+  private FileContext fc;
+
+  @Before
+  public void setUp() throws Exception {
+    conf = getTestConfiguration();
+    fc = FileContext.getFileContext(conf);
+    testRootDirPath = new Path(GenericTestUtils.getRandomizedTestDir()
+        .getAbsolutePath());
+    mkdirs(testRootDirPath);
+  }
+
+  public void tearDown() throws Exception {
+    fc.delete(testRootDirPath, true);
+  }
+
+  @Test
+  public void testRenameFileToFile() throws Exception {
+    Path srcPath = new Path(testRootDirPath, "testRenameSrc");
+    Path dstPath = new Path(testRootDirPath, "testRenameDst");
+    verifyRename(srcPath, dstPath, false);
+  }
+
+  @Test
+  public void testRenameFileToFileWithOverwrite() throws Exception {
+    Path srcPath = new Path(testRootDirPath, "testRenameSrc");
+    Path dstPath = new Path(testRootDirPath, "testRenameDst");
+    verifyRename(srcPath, dstPath, true);
+  }
+
+  @Test
+  public void testRenameFileIntoDirFile() throws Exception {
+    Path srcPath = new Path(testRootDirPath, "testRenameSrc");
+    Path dstPath = new Path(testRootDirPath, "testRenameDir/testRenameDst");
+    mkdirs(dstPath);
+    verifyRename(srcPath, dstPath, false);
+  }
+
+  @Test
+  public void testRenameFileIntoDirFileWithOverwrite() throws Exception {
+    Path srcPath = new Path(testRootDirPath, "testRenameSrc");
+    Path dstPath = new Path(testRootDirPath, "testRenameDir/testRenameDst");
+    mkdirs(dstPath);
+    verifyRename(srcPath, dstPath, true);
+  }
+
+  private void verifyRename(Path srcPath, Path dstPath,
+      boolean overwrite) throws Exception {
+    AbstractFileSystem fs = fc.getDefaultFileSystem();
+    assertTrue(fs instanceof LocalFs);
+    ChecksumFs checksumFs = (ChecksumFs) fs;
+
+    fs.delete(srcPath, true);
+    fs.delete(dstPath, true);
+
+    Options.Rename renameOpt = Options.Rename.NONE;
+    if (overwrite) {
+      renameOpt = Options.Rename.OVERWRITE;
+      createTestFile(checksumFs, dstPath, 2);
+    }
+
+    // ensure file + checksum are moved
+    createTestFile(checksumFs, srcPath, 1);
+    assertTrue(fc.util().exists(checksumFs.getChecksumFile(srcPath)));
 
 Review comment:
   can the assert add text as to what is failing? (see the sketch below)
   
   FWIW, we are moving to AssertJ for 3.2+ suites, but it would make 
backporting near-impossible. Just stick to JUnit asserts, but consider "what 
information would I want in a Jenkins run to debug a failure?"
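
   A possible form, reusing the calls already in the test (sketch):
   
   ```java
   // On failure, the message now names the missing checksum file.
   Path srcChecksum = checksumFs.getChecksumFile(srcPath);
   assertTrue("checksum file " + srcChecksum + " should exist after create",
       fc.util().exists(srcChecksum));
   ```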


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, opt

2019-09-03 Thread GitBox
steveloughran commented on a change in pull request #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#discussion_r320397813
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestChecksumFs.java
 ##
 @@ -0,0 +1,130 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.util.EnumSet;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.local.LocalFs;
+import org.apache.hadoop.fs.permission.FsPermission;
+import static org.apache.hadoop.fs.CreateFlag.*;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Before;
+import org.junit.Test;
+import static org.junit.Assert.*;
+
+/**
+ * This class tests the functionality of ChecksumFs.
+ */
+public class TestChecksumFs {
+  private Configuration conf;
+  private Path testRootDirPath;
+  private FileContext fc;
+
+  @Before
+  public void setUp() throws Exception {
+    conf = getTestConfiguration();
+    fc = FileContext.getFileContext(conf);
+    testRootDirPath = new Path(GenericTestUtils.getRandomizedTestDir()
+        .getAbsolutePath());
+    mkdirs(testRootDirPath);
+  }
+
+  public void tearDown() throws Exception {
+    fc.delete(testRootDirPath, true);
+  }
+
+  @Test
+  public void testRenameFileToFile() throws Exception {
+    Path srcPath = new Path(testRootDirPath, "testRenameSrc");
+    Path dstPath = new Path(testRootDirPath, "testRenameDst");
+    verifyRename(srcPath, dstPath, false);
+  }
+
+  @Test
+  public void testRenameFileToFileWithOverwrite() throws Exception {
+    Path srcPath = new Path(testRootDirPath, "testRenameSrc");
+    Path dstPath = new Path(testRootDirPath, "testRenameDst");
+    verifyRename(srcPath, dstPath, true);
+  }
+
+  @Test
+  public void testRenameFileIntoDirFile() throws Exception {
+    Path srcPath = new Path(testRootDirPath, "testRenameSrc");
+    Path dstPath = new Path(testRootDirPath, "testRenameDir/testRenameDst");
+    mkdirs(dstPath);
+    verifyRename(srcPath, dstPath, false);
+  }
+
+  @Test
+  public void testRenameFileIntoDirFileWithOverwrite() throws Exception {
+    Path srcPath = new Path(testRootDirPath, "testRenameSrc");
+    Path dstPath = new Path(testRootDirPath, "testRenameDir/testRenameDst");
+    mkdirs(dstPath);
+    verifyRename(srcPath, dstPath, true);
+  }
+
+  private void verifyRename(Path srcPath, Path dstPath,
+      boolean overwrite) throws Exception {
+    AbstractFileSystem fs = fc.getDefaultFileSystem();
+    assertTrue(fs instanceof LocalFs);
 
 Review comment:
   just cast it; you'll get a better stack trace that way
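
   That is (sketch):
   
   ```java
   // A plain cast fails with a ClassCastException naming the actual type,
   // which gives a more useful trace than a bare assertTrue() failure.
   ChecksumFs checksumFs = (ChecksumFs) fc.getDefaultFileSystem();
   ```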


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, opt

2019-09-03 Thread GitBox
steveloughran commented on a change in pull request #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#discussion_r320397235
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestChecksumFs.java
 ##
 @@ -0,0 +1,130 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.util.EnumSet;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.local.LocalFs;
+import org.apache.hadoop.fs.permission.FsPermission;
+import static org.apache.hadoop.fs.CreateFlag.*;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Before;
+import org.junit.Test;
+import static org.junit.Assert.*;
+
+/**
+ * This class tests the functionality of ChecksumFs.
+ */
+public class TestChecksumFs {
+  private Configuration conf;
+  private Path testRootDirPath;
+  private FileContext fc;
+
+  @Before
+  public void setUp() throws Exception {
+    conf = getTestConfiguration();
+    fc = FileContext.getFileContext(conf);
+    testRootDirPath = new Path(GenericTestUtils.getRandomizedTestDir()
+        .getAbsolutePath());
+    mkdirs(testRootDirPath);
+  }
+
+  public void tearDown() throws Exception {
 
 Review comment:
   This needs @After, and it must handle the case where fc == null; see the 
sketch below.
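
   Something like this (sketch; requires an org.junit.After import):
   
   ```java
   @After
   public void tearDown() throws Exception {
     // fc can be null if setUp() failed before creating the FileContext.
     if (fc != null) {
       fc.delete(testRootDirPath, true);
     }
   }
   ```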


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, opt

2019-09-03 Thread GitBox
steveloughran commented on a change in pull request #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#discussion_r320396970
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestChecksumFs.java
 ##
 @@ -0,0 +1,130 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.util.EnumSet;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.local.LocalFs;
+import org.apache.hadoop.fs.permission.FsPermission;
+import static org.apache.hadoop.fs.CreateFlag.*;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Before;
+import org.junit.Test;
+import static org.junit.Assert.*;
+
+/**
+ * This class tests the functionality of ChecksumFs.
+ */
+public class TestChecksumFs {
 
 Review comment:
   If you don't extend org.apache.hadoop.test.HadoopTestBase, look at that 
class and know that a timeout rule is not optional; see the sketch below.
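
   For reference, a JUnit 4 rule along the lines of what HadoopTestBase sets 
up (the 100-second limit here is an assumption, not the base class's actual 
value):
   
   ```java
   import org.junit.Rule;
   import org.junit.rules.Timeout;
   
   // Fails any test method in the class that runs longer than the limit.
   @Rule
   public Timeout testTimeout = Timeout.seconds(100);
   ```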


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, opt

2019-09-03 Thread GitBox
steveloughran commented on a change in pull request #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#discussion_r320396059
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CheckedBiFunction.java
 ##
 @@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.util;
+
+import java.io.IOException;
+
+/**
+ * Defines a functional interface having two inputs which throws IOException.
+ */
+@FunctionalInterface
+public interface CheckedBiFunction<LEFT, RIGHT, THROWABLE extends IOException> {
 
 Review comment:
   Cute! I never knew you could do that with generics and exceptions!
   
   1. can you put it into org.apache.hadoop.fs.impl, where the other 
internal-for-fs-only lambda stuff is going? (a sketch follows below)
   1. be advised that for backports to branch-2 we will have to make things 
compile on Java 8. Mostly this is just using the IDE to convert things to 
callables. That doesn't mean they shouldn't be used, only that once you get 
sufficiently advanced, things become unbackportable. This patch looks fine.
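
   A sketch of the relocated interface (the type-parameter and method names 
here are assumptions; only the package move is what this comment asks for):
   
   ```java
   package org.apache.hadoop.fs.impl;
   
   import java.io.IOException;
   
   /**
    * Functional interface taking two inputs, throwing an IOException subclass.
    */
   @FunctionalInterface
   public interface CheckedBiFunction<LEFT, RIGHT, THROWABLE extends IOException> {
     void apply(LEFT left, RIGHT right) throws THROWABLE;
   }
   ```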


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


