Github user sryza commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6394#discussion_r31972804
  
    --- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
    @@ -225,12 +243,74 @@ private[yarn] class YarnAllocator(
           logInfo(s"Will request $missing executor containers, each with 
${resource.getVirtualCores} " +
             s"cores and ${resource.getMemory} MB memory including 
$memoryOverhead MB overhead")
     
    -      for (i <- 0 until missing) {
    -        val request = createContainerRequest(resource)
    -        amClient.addContainerRequest(request)
    -        val nodes = request.getNodes
    -        val hostStr = if (nodes == null || nodes.isEmpty) "Any" else nodes.last
    -        logInfo(s"Container request (host: $hostStr, capability: $resource)")
    +      // Calculate the number of executors expected to satisfy all the locality-preferred tasks
    +      val localityAwareTaskCores = localityAwarePendingTaskNum * CPUS_PER_TASK
    +      val expectedLocalityAwareContainerNum =
    +        (localityAwareTaskCores + resource.getVirtualCores - 1) / resource.getVirtualCores
    +
    +      // Get all the existing containers whose host matches a preferred locality
    +      val existedMatchedContainers = allocatedHostToContainersMap.filter { case (host, _) =>
    +        preferredLocalityToCounts.contains(host)
    +      }
    +      val existedMatchedContainerNum = existedMatchedContainers.values.map(_.size).sum
    +
    +      // The number of containers to allocate, divided into two groups: one with node locality,
    +      // and the other without locality preference.
    +      var requiredLocalityFreeContainerNum: Int = 0
    +      var requiredLocalityAwareContainerNum: Int = 0
    +
    +      if (expectedLocalityAwareContainerNum <= existedMatchedContainerNum) {
    +        // If the currently allocated executors can satisfy all the locality-preferred tasks,
    --- End diff --
    
    This is a little weird to me.  IIUC, what we're saying here is:
    * Find all the containers from all the nodes that have at least one task that would be happy to be there.
    * If, taken together, these containers have enough capacity to run all pending tasks with locality preferences, none of the executor requests we submit need to have locality preferences (rough sketch below).
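
    To check my reading, here's a minimal standalone sketch of that logic; all names and numbers below are mine, not from the patch:

    ```scala
    object LocalityCheckSketch {
      // Executors needed to cover all locality-preferring tasks (ceiling division),
      // compared against executors already sitting on some preferred host.
      def localityFreeRequestsOnly(
          pendingLocalityTasks: Int,
          cpusPerTask: Int,
          executorCores: Int,
          preferredHosts: Set[String],
          hostToExecutorCount: Map[String, Int]): Boolean = {
        val expectedContainers =
          (pendingLocalityTasks * cpusPerTask + executorCores - 1) / executorCores
        val matchedContainers = hostToExecutorCount
          .filter { case (host, _) => preferredHosts.contains(host) }
          .values.sum
        expectedContainers <= matchedContainers
      }

      def main(args: Array[String]): Unit = {
        // Hypothetical state: 10 one-core pending tasks prefer node1, where we already
        // hold 3 four-core executors; ceil(10 / 4) = 3 <= 3, so locality gets dropped.
        val dropLocality = localityFreeRequestsOnly(
          pendingLocalityTasks = 10,
          cpusPerTask = 1,
          executorCores = 4,
          preferredHosts = Set("node1"),
          hostToExecutorCount = Map("node1" -> 3, "node2" -> 5))
        println(s"locality-free requests only: $dropLocality")
      }
    }
    ```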
    
    This would result in sub-optimal behavior in the following situation:
    * We have 48 tasks that have locality preferences distributed across a wide number of nodes, including one task that can run on either node 1 or node 2.
    * Node 1 and node 2 each have 6 executors with 4 cores each.

    In this situation, we'd end up giving up on locality-based requests, even though it would make sense to request executors on some of the nodes that the tasks want to be on.
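
    Plugging that scenario into the same arithmetic (assuming 1 CPU per task; the other numbers are from the scenario above), runnable in a Scala REPL:

    ```scala
    // 48 one-core locality-preferring tasks, 4-core executors,
    // and 6 executors already running on each of node 1 and node 2.
    val expectedContainers = (48 * 1 + 4 - 1) / 4   // ceil(48 / 4) = 12 executors needed
    val matchedContainers = 6 + 6                   // 12 executors on hosts some task prefers
    val localityFreeOnly = expectedContainers <= matchedContainers   // 12 <= 12 => true
    // So every new request goes out without a locality preference, even though most of the
    // 48 tasks prefer hosts not covered by those 12 executors.
    ```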

