lakshmi-manasa-g commented on a change in pull request #1446:
URL: https://github.com/apache/samza/pull/1446#discussion_r537893226



##########
File path: samza-yarn/src/main/java/org/apache/samza/job/yarn/RackManager.java
##########
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.samza.job.yarn;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.hadoop.yarn.api.records.NodeReport;
+import org.apache.hadoop.yarn.api.records.NodeState;
+import org.apache.hadoop.yarn.client.api.impl.YarnClientImpl;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.samza.clustermanager.FaultDomain;
+import org.apache.samza.clustermanager.FaultDomainManager;
+import org.apache.samza.clustermanager.FaultDomainType;
+
+public class RackManager implements FaultDomainManager {
+
+  private final Map<String, FaultDomain> nodeToRackMap;
+
+  public RackManager() {
+    this.nodeToRackMap = computeNodeToFaultDomainMap();
+  }
+
+  /**
+   * This method returns all the rack values in a cluster for RUNNING nodes.
+   * @return a set of {@link FaultDomain}s
+   */
+  @Override
+  public Set<FaultDomain> getAllFaultDomains() {
+    return new HashSet<>(nodeToRackMap.values());
+  }
+
+  /**
+   * This method returns the rack a particular node resides on.
+   * @param host the host
+   * @return the {@link FaultDomain}
+   */
+  @Override
+  public FaultDomain getFaultDomainOfNode(String host) {
+    return nodeToRackMap.get(host);
+  }
+
+  /**
+   * This method checks if the two hostnames provided reside on the same rack.
+   * @param host1 hostname
+   * @param host2 hostname
+   * @return true if the hosts exist on the same rack
+   */
+  @Override
+  public boolean checkHostsOnSameFaultDomain(String host1, String host2) {
+    return nodeToRackMap.get(host1).equals(nodeToRackMap.get(host2));
+  }
+
+  /**
+   * This method gets the set of racks that the given active container's corresponding standby can be placed on.
+   * @param host The hostname of the active container
+   * @return the set of racks on which this active container's standby can be scheduled
+   */
+  @Override
+  public Set<FaultDomain> getAllowedFaultDomainsForSchedulingContainer(String host) {
+    FaultDomain activeContainerRack = nodeToRackMap.get(host);
+    Set<FaultDomain> standbyRacks = new HashSet<>(nodeToRackMap.values());
+    standbyRacks.remove(activeContainerRack);

Review comment:
       This works for a standby replication factor of 2 (i.e. one active + one standby): the PR guarantees the standby is not on the same rack as the active. With replication > 2, however, the standbys themselves might still land on the same rack. Please call out that this works for 2 and what to expect for > 2; noting it in the PR description is enough.
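To illustrate the > 2 case, here is a minimal, hypothetical sketch (not the PR's actual code; `allowedRacks` and its parameters are illustrative) of an allowed-rack computation that also excludes racks already holding a standby of the same active, which is one way the concern above could be addressed:

```java
import java.util.HashSet;
import java.util.Set;

public class StandbyRackExclusion {
  // Hypothetical sketch: the PR only removes the active's rack, so with
  // replication > 2 two standbys can share a rack. Subtracting the racks of
  // already-placed standbys as well would avoid that.
  static Set<String> allowedRacks(Set<String> allRacks, String activeRack,
      Set<String> placedStandbyRacks) {
    Set<String> allowed = new HashSet<>(allRacks);
    allowed.remove(activeRack);          // never co-locate with the active
    allowed.removeAll(placedStandbyRacks); // never co-locate with a sibling standby
    return allowed;
  }
}
```

With racks {r1, r2, r3}, active on r1, and one standby already on r2, only r3 would remain eligible.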

##########
File path: samza-core/src/main/java/org/apache/samza/clustermanager/StandbyContainerManager.java
##########
@@ -375,18 +403,32 @@ boolean checkStandbyConstraints(String containerIdToStart, String host) {
      SamzaResource resource = samzaApplicationState.pendingProcessors.get(containerID);
 
      // return false if a conflicting container is pending for launch on the host
-      if (resource != null && resource.getHost().equals(host)) {
-        log.info("Container {} cannot be started on host {} because container {} is already scheduled on this host",
-            containerIdToStart, host, containerID);
-        return false;
+      if (resource != null) {
+        if (!resource.getHost().equals(ResourceRequestState.ANY_HOST) && !host.equals(ResourceRequestState.ANY_HOST)

Review comment:
       1. What if host = ANY_HOST but resource.getHost() is not ANY_HOST, or vice versa?
   
   2. Earlier in StandbyContainerManager, a resource request is made with ANY_HOST and a fault-domain set. Could there be a pending processor with ANY_HOST but the same fault domain? Since a resource request carries a set of allowed fault domains rather than a specific one, is it possible that a pending processor gets started on the same fault domain as this standby? Or do we rely on catching it later, when that pending processor runs, invokes this same `checkStandbyConstraints`, and finds the current standby in runningProcessors via the check below? If that is the case, what was the original logic behind checking pendingProcessors at all?
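One possible answer to question 1 above can be sketched as a small predicate that treats ANY_HOST as "rack unknown, defer the check". This is a hypothetical sketch, not the Samza API: `conflicts`, the local `ANY_HOST` constant, and the `sameRack` predicate are all illustrative stand-ins for `ResourceRequestState.ANY_HOST` and `faultDomainManager.checkHostsOnSameFaultDomain`:

```java
import java.util.function.BiPredicate;

public class HostConflictCheck {
  static final String ANY_HOST = "ANY_HOST"; // stand-in for ResourceRequestState.ANY_HOST

  // True when placing a container on candidateHost conflicts with an existing
  // placement on existingHost, given a same-rack predicate.
  static boolean conflicts(String candidateHost, String existingHost,
      BiPredicate<String, String> sameRack) {
    if (ANY_HOST.equals(candidateHost) || ANY_HOST.equals(existingHost)) {
      // Rack cannot be compared until a concrete host is assigned, so no
      // conflict can be established here; the check must re-run on assignment.
      return false;
    }
    return candidateHost.equals(existingHost) || sameRack.test(candidateHost, existingHost);
  }
}
```

Whether deferring is actually safe depends on the re-check happening when the ANY_HOST request resolves to a concrete host, which is exactly what question 2 asks about.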

##########
File path: samza-core/src/main/java/org/apache/samza/clustermanager/StandbyContainerManager.java
##########
@@ -375,18 +403,32 @@ boolean checkStandbyConstraints(String containerIdToStart, String host) {
      SamzaResource resource = samzaApplicationState.pendingProcessors.get(containerID);
 
      // return false if a conflicting container is pending for launch on the host
-      if (resource != null && resource.getHost().equals(host)) {
-        log.info("Container {} cannot be started on host {} because container {} is already scheduled on this host",
-            containerIdToStart, host, containerID);
-        return false;
+      if (resource != null) {
+        if (!resource.getHost().equals(ResourceRequestState.ANY_HOST) && !host.equals(ResourceRequestState.ANY_HOST)
+                && faultDomainManager.checkHostsOnSameFaultDomain(host, resource.getHost())) {
+          log.info("Container {} cannot be started on host {} because container {} is already scheduled on this rack",
+                  containerIdToStart, host, containerID);
+          return false;
+        } else if (resource.getHost().equals(host)) {
+          log.info("Container {} cannot be started on host {} because container {} is already scheduled on this host",
+                  containerIdToStart, host, containerID);
+          return false;
+        }
      }
 
      // return false if a conflicting container is running on the host
      resource = samzaApplicationState.runningProcessors.get(containerID);
-      if (resource != null && resource.getHost().equals(host)) {
-        log.info("Container {} cannot be started on host {} because container {} is already running on this host",
-            containerIdToStart, host, containerID);
-        return false;
+      if (resource != null) {
+        if (!resource.getHost().equals(ResourceRequestState.ANY_HOST) && !host.equals(ResourceRequestState.ANY_HOST)
+                && faultDomainManager.checkHostsOnSameFaultDomain(host, resource.getHost())) {
+          log.info("Container {} cannot be started on host {} because container {} is already running on this rack",
+                  containerIdToStart, host, containerID);
+          return false;
+        } else if (resource.getHost().equals(host)) {
+          log.info("Container {} cannot be started on host {} because container {} is already running on this host",
+                  containerIdToStart, host, containerID);
+          return false;
Review comment:
       This looks the same as the check above. Would it be good to extract a helper? WDYT?
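A minimal sketch of such a helper (the names `violatesConstraint` and the local `ANY_HOST` constant are illustrative, not the Samza API). It preserves the PR's logic as written, including the ANY_HOST equality behavior questioned elsewhere in this review; only the "scheduled"/"running" log message would remain at the call sites:

```java
import java.util.function.BiPredicate;

public class StandbyConstraintHelper {
  static final String ANY_HOST = "ANY_HOST"; // stand-in for ResourceRequestState.ANY_HOST

  // True if starting a container on `host` conflicts with an existing
  // placement on `existingHost` (either same concrete host, or same rack
  // when both hosts are concrete). Null means no existing placement.
  static boolean violatesConstraint(String host, String existingHost,
      BiPredicate<String, String> sameRack) {
    if (existingHost == null) {
      return false;
    }
    if (!ANY_HOST.equals(existingHost) && !ANY_HOST.equals(host)
        && sameRack.test(host, existingHost)) {
      return true; // same rack
    }
    return existingHost.equals(host); // same host
  }
}
```

The pending and running branches would then each reduce to one call plus their own log line.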

##########
File path: samza-core/src/main/java/org/apache/samza/clustermanager/StandbyContainerManager.java
##########
@@ -375,18 +403,32 @@ boolean checkStandbyConstraints(String containerIdToStart, String host) {
      SamzaResource resource = samzaApplicationState.pendingProcessors.get(containerID);
 
      // return false if a conflicting container is pending for launch on the host
-      if (resource != null && resource.getHost().equals(host)) {
-        log.info("Container {} cannot be started on host {} because container {} is already scheduled on this host",
-            containerIdToStart, host, containerID);
-        return false;
+      if (resource != null) {
+        if (!resource.getHost().equals(ResourceRequestState.ANY_HOST) && !host.equals(ResourceRequestState.ANY_HOST)
+                && faultDomainManager.checkHostsOnSameFaultDomain(host, resource.getHost())) {
+          log.info("Container {} cannot be started on host {} because container {} is already scheduled on this rack",
+                  containerIdToStart, host, containerID);
+          return false;
+        } else if (resource.getHost().equals(host)) {
+          log.info("Container {} cannot be started on host {} because container {} is already scheduled on this host",
+                  containerIdToStart, host, containerID);
+          return false;
+        }
      }
 
      // return false if a conflicting container is running on the host
      resource = samzaApplicationState.runningProcessors.get(containerID);
-      if (resource != null && resource.getHost().equals(host)) {
-        log.info("Container {} cannot be started on host {} because container {} is already running on this host",
-            containerIdToStart, host, containerID);
-        return false;
+      if (resource != null) {
+        if (!resource.getHost().equals(ResourceRequestState.ANY_HOST) && !host.equals(ResourceRequestState.ANY_HOST)
Review comment:
       1. What if host = ANY_HOST but resource.getHost() is not ANY_HOST?
   
   2. When alternative resources are used, checkStandbyAndRunStreamProc is called with ANY_HOST and alternativeResource.get() (a resource with a concrete host name, not ANY_HOST). Are we handling this scenario? It looks like it might be (checkStandbyConstraints is then called with the container ID and a concrete host name), but I wanted a double check on that.
   
   3. IIUC, a resource is added to SamzaApplicationState.runningProcessors in YarnClusterResourceManager.onContainerStarted, which creates a new SamzaResource with values taken from the YARN Container and hence will never have ANY_HOST, right? If so, maybe we can simplify this check? Or was there another reason (like guarding against ANY_HOST in the resource)?
   

##########
File path: samza-core/src/test/java/org/apache/samza/clustermanager/MockFaultDomainManager.java
##########
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.samza.clustermanager;
+
+import com.google.common.collect.ImmutableMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+public class MockFaultDomainManager implements FaultDomainManager {
+
+  private final Map<String, FaultDomain> nodeToFaultDomainMap;
+
+  public MockFaultDomainManager() {
+    FaultDomain faultDomain1 = new FaultDomain(FaultDomainType.RACK, "rack-1");
+    FaultDomain faultDomain2 = new FaultDomain(FaultDomainType.RACK, "rack-2");
+    FaultDomain faultDomain3 = new FaultDomain(FaultDomainType.RACK, "rack-1");
+    FaultDomain faultDomain4 = new FaultDomain(FaultDomainType.RACK, "rack-3");
+    FaultDomain faultDomain5 = new FaultDomain(FaultDomainType.RACK, "rack-4");
+    nodeToFaultDomainMap = ImmutableMap.of("host-1", faultDomain1, "host-2", faultDomain2,

Review comment:
       It might be good to have at least two hosts in the same rack, so the same-rack paths can actually be exercised in tests.

##########
File path: samza-yarn/src/main/java/org/apache/samza/job/yarn/RackManager.java
##########
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.samza.job.yarn;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.hadoop.yarn.api.records.NodeReport;
+import org.apache.hadoop.yarn.api.records.NodeState;
+import org.apache.hadoop.yarn.client.api.impl.YarnClientImpl;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.samza.clustermanager.FaultDomain;
+import org.apache.samza.clustermanager.FaultDomainManager;
+import org.apache.samza.clustermanager.FaultDomainType;
+
+public class RackManager implements FaultDomainManager {
+
+  private final Map<String, FaultDomain> nodeToRackMap;
+
+  public RackManager() {
+    this.nodeToRackMap = computeNodeToFaultDomainMap();
+  }
+
+  /**
+   * This method returns all the rack values in a cluster for RUNNING nodes.
+   * @return a set of {@link FaultDomain}s
+   */
+  @Override
+  public Set<FaultDomain> getAllFaultDomains() {
+    return new HashSet<>(nodeToRackMap.values());
+  }
+
+  /**
+   * This method returns the rack a particular node resides on.
+   * @param host the host
+   * @return the {@link FaultDomain}
+   */
+  @Override
+  public FaultDomain getFaultDomainOfNode(String host) {
+    return nodeToRackMap.get(host);
+  }
+
+  /**
+   * This method checks if the two hostnames provided reside on the same rack.
+   * @param host1 hostname
+   * @param host2 hostname
+   * @return true if the hosts exist on the same rack
+   */
+  @Override
+  public boolean checkHostsOnSameFaultDomain(String host1, String host2) {
+    return nodeToRackMap.get(host1).equals(nodeToRackMap.get(host2));
+  }
+
+  /**
+   * This method gets the set of racks that the given active container's corresponding standby can be placed on.
+   * @param host The hostname of the active container
+   * @return the set of racks on which this active container's standby can be scheduled
+   */
+  @Override
+  public Set<FaultDomain> getAllowedFaultDomainsForSchedulingContainer(String host) {
+    FaultDomain activeContainerRack = nodeToRackMap.get(host);
+    Set<FaultDomain> standbyRacks = new HashSet<>(nodeToRackMap.values());
+    standbyRacks.remove(activeContainerRack);
+    return standbyRacks;
+  }
+
+  /**
+   * This method returns the cached map of nodes to racks.
+   * @return stored map of node to the rack it resides on
+   */
+  @Override
+  public Map<String, FaultDomain> getNodeToFaultDomainMap() {
+    return nodeToRackMap;
+  }
+
+  /**
+   * This method gets the node to rack (fault domain for Yarn) mapping from Yarn for all running nodes.
+   * @return A map of hostname to rack name.
+   */
+  @Override
+  public Map<String, FaultDomain> computeNodeToFaultDomainMap() {
+    YarnClientImpl yarnClient = new YarnClientImpl();
+    Map<String, FaultDomain> nodeToRackMap = new HashMap<>();
+    try {
+      List<NodeReport> nodeReport = yarnClient.getNodeReports(NodeState.RUNNING);
+      nodeReport.forEach(report -> {
+        FaultDomain rack = new FaultDomain(FaultDomainType.RACK, report.getRackName());
+        nodeToRackMap.put(report.getNodeId().getHost(), rack);
+      });
+    } catch (YarnException e) {
+      e.printStackTrace();

Review comment:
       This exception is swallowed, no? What happens to the rack manager in this case — will it still give a correct view of the cluster's host->rack mapping? How do we ensure the feature still works?
   
   It possibly returns an empty map. Then nodeToRackMap.get(host) in `getAllowedFaultDomainsForSchedulingContainer` above will return null, and removing a null from the set could throw an NPE (though HashSet itself doesn't throw, I think). Even if no NPE occurs, `getAllowedFaultDomainsForSchedulingContainer` returns an empty set. What is the behavior of this feature when a resource request is made with an empty set of racks — will Yarn just pick any rack, or fail the request?
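A fail-fast alternative could be sketched as below. This is hypothetical, not the PR's code: the YARN client is abstracted behind an illustrative `NodeReportSource` interface and the exception types are stubbed, but the shape shows the idea of surfacing the failure instead of continuing with an empty map:

```java
import java.util.HashMap;
import java.util.Map;

public class FaultDomainMapLoader {
  // Stub for a fetch failure; the real code would wrap YarnException.
  static class FetchException extends RuntimeException {
    FetchException(String msg, Throwable cause) { super(msg, cause); }
  }

  // Illustrative abstraction over yarnClient.getNodeReports(...).
  interface NodeReportSource {
    Map<String, String> fetchNodeToRack() throws Exception;
  }

  static Map<String, String> load(NodeReportSource source) {
    try {
      Map<String, String> map = new HashMap<>(source.fetchNodeToRack());
      if (map.isEmpty()) {
        // An empty map would make every downstream rack lookup return null,
        // so treat it as a startup failure rather than a valid cluster view.
        throw new FetchException("empty node-to-rack map from YARN", null);
      }
      return map;
    } catch (FetchException e) {
      throw e;
    } catch (Exception e) {
      throw new FetchException("failed to fetch node reports", e);
    }
  }
}
```

Failing the AM startup here is one option; another is retrying the fetch. Either way the silent empty-map path is closed.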




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
