[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-04 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320642122
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##########
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * <p>
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+      LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+                                 final Configuration conf) {
+    super(nodeManager, conf);
+    this.nodeManager = nodeManager;
+    this.conf = conf;
+    heavyNodeCriteria = conf.getInt(
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+                       long heavyNodeLimit) {
+    return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+      List<DatanodeDetails> excludedNodes, int nodesRequired)
+      throws SCMException {
+    // get nodes in HEALTHY state
+    List<DatanodeDetails> healthyNodes =
+        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+    if (excludedNodes != null) {
+      healthyNodes.removeAll(excludedNodes);
+    }
+    String msg;
+    if (healthyNodes.size() == 0) {
+      msg = "No healthy node found to allocate container.";
+      LOG.error(msg);
+      throw new SCMException(msg, SCMException.ResultCodes
+          .FAILED_TO_FIND_HEALTHY_NODES);
+    }
+
+    if (healthyNodes.size() < nodesRequired) {
+      msg = String.format("Not enough healthy nodes to allocate container. %d "
+          + " datanodes required. Found %d",
+          nodesRequired, healthyNodes.size());
+      LOG
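
The four selection steps listed in the class javadoc above can be sketched end to
end in plain Java. This is a simplified illustration only, not the PR's code: the
Candidate type, the free-space anchor pick, and the same-rack preference in step 4
are assumptions standing in for DatanodeDetails, SCMContainerPlacementCapacity,
and NetworkTopology.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class PlacementSketch {
  // Hypothetical stand-in for a datanode with just the attributes the policy reads.
  static final class Candidate {
    final String name;
    final String rack;
    final boolean healthy;
    final int pipelineCount;
    final long remainingBytes;

    Candidate(String name, String rack, boolean healthy,
              int pipelineCount, long remainingBytes) {
      this.name = name;
      this.rack = rack;
      this.healthy = healthy;
      this.pipelineCount = pipelineCount;
      this.remainingBytes = remainingBytes;
    }
  }

  // Steps 1-2: healthy nodes that are not too heavily engaged in pipelines.
  static List<Candidate> viable(List<Candidate> all, int maxPipelines) {
    return all.stream()
        .filter(c -> c.healthy && c.pipelineCount <= maxPipelines)
        .collect(Collectors.toList());
  }

  // Step 3: a rough capacity-based anchor choice (most free space wins).
  static Candidate pickAnchor(List<Candidate> nodes) {
    return nodes.stream()
        .max(Comparator.comparingLong((Candidate c) -> c.remainingBytes))
        .orElseThrow(IllegalStateException::new);
  }

  // Step 4: fill the remaining slots, here naively preferring the anchor's rack.
  // Assumes required >= 1.
  static List<Candidate> place(List<Candidate> all, int required, int maxPipelines) {
    List<Candidate> pool = viable(all, maxPipelines);
    if (pool.size() < required) {
      throw new IllegalStateException("not enough viable nodes: " + pool.size());
    }
    Candidate anchor = pickAnchor(pool);
    List<Candidate> chosen = new ArrayList<>();
    chosen.add(anchor);
    pool.stream()
        .filter(c -> c != anchor)
        .sorted(Comparator.comparingInt(
            (Candidate c) -> c.rack.equals(anchor.rack) ? 0 : 1))
        .limit(required - 1)
        .forEach(chosen::add);
    return chosen;
  }
}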

[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320589992
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##########
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * <p>
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+      LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+                                 final Configuration conf) {
+    super(nodeManager, conf);
+    this.nodeManager = nodeManager;
+    this.conf = conf;
+    heavyNodeCriteria = conf.getInt(
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+                       long heavyNodeLimit) {
+    return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+      List<DatanodeDetails> excludedNodes, int nodesRequired)
+      throws SCMException {
+    // get nodes in HEALTHY state
+    List<DatanodeDetails> healthyNodes =
+        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+    if (excludedNodes != null) {
+      healthyNodes.removeAll(excludedNodes);
+    }
+    String msg;
+    if (healthyNodes.size() == 0) {
+      msg = "No healthy node found to allocate container.";
+      LOG.error(msg);
+      throw new SCMException(msg, SCMException.ResultCodes
+          .FAILED_TO_FIND_HEALTHY_NODES);
+    }
+
+    if (healthyNodes.size() < nodesRequired) {
+      msg = String.format("Not enough healthy nodes to allocate container. %d "
+          + " datanodes required. Found %d",
+          nodesRequired, healthyNodes.size());
+      LOG
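
The heavy-node check quoted above is a pure count comparison, so its boundary is
easy to pin down in a unit test. A minimal JUnit 4 sketch, with the comparison
inlined rather than calling the real NodeManager:

import org.junit.Assert;
import org.junit.Test;

public class HeavyNodeCriterionTest {
  // Mirrors the shape of meetCriteria: a node stays viable while its
  // pipeline engagement is at or below the configured limit.
  private static boolean meetCriteria(int pipelinesOnNode, long heavyNodeLimit) {
    return pipelinesOnNode <= heavyNodeLimit;
  }

  @Test
  public void limitIsInclusive() {
    Assert.assertTrue(meetCriteria(5, 5));  // exactly at the limit: still viable
    Assert.assertFalse(meetCriteria(6, 5)); // one past the limit: filtered out
  }
}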

[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320589259
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##########
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * <p>
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+      LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
 
 Review comment:
   Updated.





[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320589220
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##########
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * <p>
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+      LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+                                 final Configuration conf) {
+    super(nodeManager, conf);
+    this.nodeManager = nodeManager;
+    this.conf = conf;
+    heavyNodeCriteria = conf.getInt(
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+                       long heavyNodeLimit) {
+    return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+      List<DatanodeDetails> excludedNodes, int nodesRequired)
+      throws SCMException {
+    // get nodes in HEALTHY state
+    List<DatanodeDetails> healthyNodes =
+        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+    if (excludedNodes != null) {
+      healthyNodes.removeAll(excludedNodes);
+    }
+    String msg;
+    if (healthyNodes.size() == 0) {
+      msg = "No healthy node found to allocate container.";
 
 Review comment:
   Updated.
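
The quoted failure path builds a message, logs it, and then throws it; the same
three lines recur for each error case. One way to keep the log line and the
exception text from drifting apart is a small helper. A sketch only -- the
logAndThrow name and the local exception type are illustrative, not the PR's API:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class PlacementErrors {
  private static final Logger LOG =
      LoggerFactory.getLogger(PlacementErrors.class);

  // Local stand-in for SCMException so the sketch stays self-contained.
  static class PlacementException extends Exception {
    PlacementException(String msg) {
      super(msg);
    }
  }

  private PlacementErrors() {
  }

  // Logs the formatted message once, then raises it as a checked exception.
  static void logAndThrow(String format, Object... args)
      throws PlacementException {
    String msg = String.format(format, args);
    LOG.error(msg);
    throw new PlacementException(msg);
  }
}

A caller would then collapse each failure case to a single statement, e.g.
logAndThrow("Not enough healthy nodes to allocate container. %d datanodes "
+ "required. Found %d", nodesRequired, healthyNodes.size());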



[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320589153
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##########
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * <p>
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+      LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+                                 final Configuration conf) {
+    super(nodeManager, conf);
+    this.nodeManager = nodeManager;
+    this.conf = conf;
+    heavyNodeCriteria = conf.getInt(
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+                       long heavyNodeLimit) {
+    return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+      List<DatanodeDetails> excludedNodes, int nodesRequired)
+      throws SCMException {
+    // get nodes in HEALTHY state
+    List<DatanodeDetails> healthyNodes =
+        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+    if (excludedNodes != null) {
+      healthyNodes.removeAll(excludedNodes);
+    }
+    String msg;
+    if (healthyNodes.size() == 0) {
+      msg = "No healthy node found to allocate container.";
+      LOG.error(msg);
+      throw new SCMException(msg, SCMException.ResultCodes
+          .FAILED_TO_FIND_HEALTHY_NODES);
+    }
+
+    if (healthyNodes.size() < nodesRequired) {
+      msg = String.format("Not enough healthy nodes to allocate container. %d "
+          + " datanodes required. Found %d",
+          nodesRequired, healthyNodes.size());
+      LOG

[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320589190
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##########
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * <p>
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+      LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+                                 final Configuration conf) {
+    super(nodeManager, conf);
+    this.nodeManager = nodeManager;
+    this.conf = conf;
+    heavyNodeCriteria = conf.getInt(
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+                       long heavyNodeLimit) {
+    return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+      List<DatanodeDetails> excludedNodes, int nodesRequired)
+      throws SCMException {
+    // get nodes in HEALTHY state
+    List<DatanodeDetails> healthyNodes =
+        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+    if (excludedNodes != null) {
+      healthyNodes.removeAll(excludedNodes);
+    }
+    String msg;
+    if (healthyNodes.size() == 0) {
+      msg = "No healthy node found to allocate container.";
+      LOG.error(msg);
+      throw new SCMException(msg, SCMException.ResultCodes
+          .FAILED_TO_FIND_HEALTHY_NODES);
+    }
+
+    if (healthyNodes.size() < nodesRequired) {
+      msg = String.format("Not enough healthy nodes to allocate container. %d "
 
 Review comment:
   Updated.
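
The exclusion step just above the quoted line relies on removeAll, which in turn
relies on the node type's equals/hashCode (DatanodeDetails compares by node UUID).
A compact, self-contained rendering of the same guard logic, with java.util.UUID
standing in for DatanodeDetails:

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public class ViableNodeGuard {
  // Same shape as filterViableNodes: drop excluded nodes first, then fail
  // fast if the remaining healthy pool cannot satisfy the request.
  static List<UUID> guard(List<UUID> healthy, List<UUID> excluded,
                          int nodesRequired) {
    List<UUID> pool = new ArrayList<>(healthy);
    if (excluded != null) {
      pool.removeAll(excluded); // equality-based, like the quoted code
    }
    if (pool.isEmpty()) {
      throw new IllegalStateException("No healthy node found.");
    }
    if (pool.size() < nodesRequired) {
      throw new IllegalStateException(String.format(
          "Not enough healthy nodes: %d required, found %d.",
          nodesRequired, pool.size()));
    }
    return pool;
  }
}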



[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320589057
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##########
 @@ -0,0 +1,291 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ * <p>
+ * 1. get a list of healthy nodes
+ * 2. filter out nodes that are not too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ * described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+      LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavyNodeCriteria;
+
+  /**
+   * Constructs a Container Placement with considering only capacity.
+   * That is this policy tries to place containers based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+                                 final Configuration conf) {
+    super(nodeManager, conf);
+    this.nodeManager = nodeManager;
+    this.conf = conf;
+    heavyNodeCriteria = conf.getInt(
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+        ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  @VisibleForTesting
+  boolean meetCriteria(DatanodeDetails datanodeDetails,
+                       long heavyNodeLimit) {
+    return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+      List<DatanodeDetails> excludedNodes, int nodesRequired)
+      throws SCMException {
+    // get nodes in HEALTHY state
+    List<DatanodeDetails> healthyNodes =
+        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+    if (excludedNodes != null) {
+      healthyNodes.removeAll(excludedNodes);
+    }
+    String msg;
+    if (healthyNodes.size() == 0) {
+      msg = "No healthy node found to allocate container.";
+      LOG.error(msg);
+      throw new SCMException(msg, SCMException.ResultCodes
+          .FAILED_TO_FIND_HEALTHY_NODES);
+    }
+
+    if (healthyNodes.size() < nodesRequired) {
+      msg = String.format("Not enough healthy nodes to allocate container. %d "
+          + " datanodes required. Found %d",
+          nodesRequired, healthyNodes.size());
+      LOG

[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320589634
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
 ##########
 @@ -199,4 +220,10 @@ void processNodeReport(DatanodeDetails datanodeDetails,
    * @return the given datanode, or null if not found
    */
   DatanodeDetails getNodeByAddress(String address);
+
+  /**
+   * Get cluster map as in network topology for this node manager.
+   * @return cluster map
+   */
+  NetworkTopology getClusterMap();
 
 Review comment:
   Updated.
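
Exposing the cluster map lets the placement code ask distance questions instead of
tracking rack state itself. A sketch against a deliberately minimal, hypothetical
topology interface -- the real NetworkTopology API is richer and differs in names:

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class TopologySketch {
  // Hypothetical slice of a cluster map: just enough to rank candidates
  // by closeness to an anchor node.
  interface ClusterMap<N> {
    int distance(N a, N b); // smaller means topologically closer
  }

  // Pick the viable node nearest the anchor, in the spirit of step 4 of
  // the placement policy's class javadoc.
  static <N> Optional<N> closestTo(N anchor, List<N> viable,
                                   ClusterMap<N> map) {
    return viable.stream()
        .filter(n -> !n.equals(anchor))
        .min(Comparator.comparingInt((N n) -> map.distance(anchor, n)));
  }
}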





[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320589768
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
 ##########
 @@ -129,6 +138,18 @@
    */
   void removePipeline(Pipeline pipeline);
 
+  /**
+   * Get the entire Node2PipelineMap.
+   * @return Node2PipelineMap
+   */
+  Node2PipelineMap getNode2PipelineMap();
+
+  /**
+   * Set the Node2PipelineMap.
+   * @param node2PipelineMap Node2PipelineMap
+   */
+  void setNode2PipelineMap(Node2PipelineMap node2PipelineMap);
 
 Review comment:
   Removed from NodeManager interface.
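
For context, the map being discussed tracks which pipelines each datanode serves,
and its per-node size is what backs the heavy-node criterion. A sketch of such a
structure with hypothetical names (the real Node2PipelineMap builds on
Node2ObjectsMap and differs in detail):

import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class NodeToPipelines {
  // datanode UUID -> IDs of pipelines the node participates in.
  private final Map<UUID, Set<UUID>> map = new ConcurrentHashMap<>();

  void addPipeline(UUID datanode, UUID pipeline) {
    map.computeIfAbsent(datanode, k -> ConcurrentHashMap.newKeySet())
        .add(pipeline);
  }

  void removePipeline(UUID datanode, UUID pipeline) {
    map.computeIfPresent(datanode, (k, v) -> {
      v.remove(pipeline);
      return v.isEmpty() ? null : v; // returning null drops the entry
    });
  }

  int pipelineCount(UUID datanode) {
    return map.getOrDefault(datanode, Collections.emptySet()).size();
  }
}

Keeping this state behind the node manager implementation, as the review settled
on, avoids handing callers a mutable map through the NodeManager interface.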





[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320589875
 
 

 ##########
 File path: hadoop-ozone/pom.xml
 ##########
 @@ -19,7 +19,7 @@
 3.2.0
 
   
-  hadoop-ozone
+  hadoop-_ozone
 
 Review comment:
   Typo. Updated.





[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320589570
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
 ##########
 @@ -129,6 +138,18 @@
    */
   void removePipeline(Pipeline pipeline);
 
+  /**
+   * Get the entire Node2PipelineMap.
+   * @return Node2PipelineMap
+   */
+  Node2PipelineMap getNode2PipelineMap();
+
+  /**
+   * Set the Node2PipelineMap.
+   * @param node2PipelineMap Node2PipelineMap
+   */
+  void setNode2PipelineMap(Node2PipelineMap node2PipelineMap);
 
 Review comment:
   Updated.





[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-03 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r320548487
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/Node2ObjectsMap.java
 ##########
 @@ -83,7 +83,7 @@ public void insertNewDatanode(UUID datanodeID, Set<T> containerIDs)
    *
    * @param datanodeID - Datanode ID.
    */
-  void removeDatanode(UUID datanodeID) {
+  public void removeDatanode(UUID datanodeID) {
 
 Review comment:
   Sure. Updated.





[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-08-29 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r319342738
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##########
 @@ -0,0 +1,237 @@
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing and network topology
+ * to supply pipeline creation.
+ * <p>
+ * 1. get a list of healthy nodes
+ * 2. filter out viable nodes that either don't have enough size left
+ *    or are too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ *    described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+    @VisibleForTesting
+    static final Logger LOG =
+            LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+    private final NodeManager nodeManager;
+    private final Configuration conf;
+    private final int heavy_node_criteria;
+
+    /**
+     * Constructs a Container Placement with considering only capacity.
+     * That is this policy tries to place containers based on node weight.
+     *
+     * @param nodeManager Node Manager
+     * @param conf Configuration
+     */
+    public PipelinePlacementPolicy(final NodeManager nodeManager,
+                                   final Configuration conf) {
+        super(nodeManager, conf);
+        this.nodeManager = nodeManager;
+        this.conf = conf;
+        heavy_node_criteria = conf.getInt(ScmConfigKeys.OZONE_SCM_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+                ScmConfigKeys.OZONE_SCM_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+    }
+
+    /**
+     * Returns true if this node meets the criteria.
+     *
+     * @param datanodeDetails DatanodeDetails
+     * @return true if we have enough space.
+     */
+    boolean meetCriteria(DatanodeDetails datanodeDetails,
+                         long sizeRequired) {
+        SCMNodeMetric nodeMetric = nodeManager.getNodeStat(datanodeDetails);
+        boolean hasEnoughSpace = (nodeMetric != null) && (nodeMetric.get() != null)
+                && nodeMetric.get().getRemaining().hasResources(sizeRequired);
+        boolean loadNotTooHeavy = nodeManager.getPipelinesCount(datanodeDetails) <= heavy_node_criteria;
+        return hasEnoughSpace && loadNotTooHeavy;
+    }
+
+    /**
+     * Filter out viable nodes based on
+     * 1. nodes that are healthy
+     * 2. nodes that have enough space
+     * 3. nodes that are not too heavily engaged in other pipelines
+     * @param excludedNodes - excluded nodes
+     * @param nodesRequired - number of datanodes required.
+     * @param sizeRequired - size required for the container or block.
+     * @return a list of viable nodes
+     * @throws SCMException when viable nodes are not enough in numbers
+     */
+    List<DatanodeDetails> filterViableNodes(List<DatanodeDetails> excludedNodes,
+                                            int nodesRequired, final long sizeRequired) throws SCMException {
+        // get nodes in HEALTHY state
+        List<DatanodeDetails> healthyNodes =
+                nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+        if (excludedNodes != null) {
+            healthyNodes.removeAll(excludedNodes);
+        }
+        String msg;
+        if (healthyNodes.size() == 0) {
+            msg = "No healthy node found to allocate container.";
+            LOG.error(msg);
+            throw new SCMException(msg, SCMException.ResultCodes
+                    .FAILED_TO_FIND_HEALTHY_NODES);
+        }
+
+        if (healthyNodes.size() < nodesRequired) {
+            msg = String.format("Not enough healthy nodes to allocate container. %d "
+                    + " datanodes required. Found %d",
+                    nodesRequired, healthyNodes.size());
+            LOG.error(msg);
+            throw new SCMException(msg,
+                    SCMException.ResultCodes.FAILE
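
This earlier revision's anchor selection defers to the algorithm described in
SCMContainerPlacementCapacity, which compares two randomly drawn candidates and
keeps the one with more available space. A self-contained sketch of that
load-spreading step, assuming only a hypothetical remainingBytes accessor:

import java.util.List;
import java.util.Random;

public class TwoChoicesSketch {
  private static final Random RANDOM = new Random();

  // Hypothetical view of a node's free space.
  interface HasRemaining {
    long remainingBytes();
  }

  // Draw two distinct random candidates and keep the roomier one; over many
  // draws this steers load toward emptier nodes without a full sort.
  static <N extends HasRemaining> N chooseByCapacity(List<N> nodes) {
    if (nodes.isEmpty()) {
      throw new IllegalArgumentException("no candidates");
    }
    if (nodes.size() == 1) {
      return nodes.get(0);
    }
    int first = RANDOM.nextInt(nodes.size());
    int second = RANDOM.nextInt(nodes.size() - 1);
    if (second >= first) {
      second++; // shift so the two indices are always distinct
    }
    N a = nodes.get(first);
    N b = nodes.get(second);
    return a.remainingBytes() >= b.remainingBytes() ? a : b;
  }
}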

[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-08-29 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r318928096
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##########
 @@ -0,0 +1,237 @@
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that choose datanodes based on load balancing and network topology
+ * to supply pipeline creation.
+ * <p>
+ * 1. get a list of healthy nodes
+ * 2. filter out viable nodes that either don't have enough size left
+ *    or are too heavily engaged in other pipelines
+ * 3. Choose an anchor node among the viable nodes which follows the algorithm
+ *    described @SCMContainerPlacementCapacity
+ * 4. Choose other nodes around the anchor node based on network topology
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
 
 Review comment:
   I agree the overall policy interface needs to be refactored and renamed. I 
would say SCMCommonPolicy would be a great base and maybe we can even have an
OzoneNodesDistributionCommonPolicy.
   
   This will be tracked in https://issues.apache.org/jira/browse/HDDS-1571.
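
One plausible shape for that refactor: a common base that owns the shared
healthy-node filtering, with the pipeline- or container-specific choice left to
subclasses. This is a hypothetical sketch of the direction only, not the design
adopted under HDDS-1571:

import java.util.List;
import java.util.stream.Collectors;

public abstract class CommonPlacementPolicy<N> {
  // Subclasses define what "healthy" means and how to pick from the pool.
  protected abstract boolean isHealthy(N node);

  protected abstract List<N> choose(List<N> viable, int nodesRequired);

  // Shared skeleton: filter, validate, then delegate the actual selection.
  public final List<N> chooseDatanodes(List<N> all, int nodesRequired) {
    List<N> viable = all.stream()
        .filter(this::isHealthy)
        .collect(Collectors.toList());
    if (viable.size() < nodesRequired) {
      throw new IllegalStateException("not enough viable nodes");
    }
    return choose(viable, nodesRequired);
  }
}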





[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-08-29 Thread GitBox
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r318925940
 
 

 ##########
 File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 ##########
 @@ -329,6 +329,10 @@
   "ozone.scm.pipeline.owner.container.count";
   public static final int OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT = 3;
 
+  public static final String OZONE_SCM_DATANODE_MAX_PIPELINE_ENGAGEMENT =
 
 Review comment:
   Sure. Added.

