Github user steveloughran commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6394#discussion_r31427020
  
    --- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
    @@ -253,10 +269,26 @@ private[yarn] class YarnAllocator(
        * added in recent versions.
        */
       private def createContainerRequest(resource: Resource): ContainerRequest = {
    +    // filter nodes which don't have containers but node preference is required.
    +    val nodes = {
    +      val unusedNodes = preferredNodeLocationToUses.filterKeys(_ == false).keys
    +      if (unusedNodes.isEmpty) {
    +        null
    +      } else {
    +        unusedNodes.toArray
    +      }
    +    }
    +
    +    val racks = if (preferredRackLocations.isEmpty) {
    +      null
    +    } else {
    +      preferredRackLocations.toArray
    +    }
    +
         nodeLabelConstructor.map { constructor =>
    -      constructor.newInstance(resource, null, null, RM_REQUEST_PRIORITY, true: java.lang.Boolean,
    +      constructor.newInstance(resource, nodes, racks, RM_REQUEST_PRIORITY, true: java.lang.Boolean,
    --- End diff --
    
    YARN blows up if you try to create requests with "relax=true" and "relax=false" at the same priority, or if you try to set relax=true on a request with locality requirements. If that hasn't surfaced yet, it's possibly just luck: requests are being satisfied fast enough that the conflict hasn't shown up.
    
    I'd recommend:
    1. Only set that relax bit if the nodes or racks lists are non-null.
    2. Use a separate priority for relaxed vs. unrelaxed requests.
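
    That split could be sketched roughly like this (a hedged model only: `StrictPriority`/`RelaxedPriority` are hypothetical placeholder values, and plain Scala types stand in for YARN's `Priority` and `ContainerRequest`):

    ```scala
    // Sketch of the recommendation: strict and relaxed requests never share a
    // priority, and relax=true is never combined with locality constraints.
    object LocalityRequestSketch {
      val RelaxedPriority = 1 // hypothetical: requests with no locality preference
      val StrictPriority  = 2 // hypothetical: requests pinned to nodes/racks

      /** Returns (relaxLocality, priority) for a request with the given
        * preferred nodes and racks (either may be null, as in the diff). */
      def requestParams(nodes: Array[String], racks: Array[String]): (Boolean, Int) = {
        val hasLocality =
          (nodes != null && nodes.nonEmpty) || (racks != null && racks.nonEmpty)
        if (hasLocality) (false, StrictPriority)
        else (true, RelaxedPriority)
      }
    }
    ```

    That way the AM never hands the RM two requests at one priority that disagree on the relax flag.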


