Github user 10110346 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19832#discussion_r166266580
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala ---
    @@ -671,10 +671,23 @@ private[deploy] class Master(
           // If the cores left is less than the coresPerExecutor,the cores left will not be allocated
           if (app.coresLeft >= coresPerExecutor) {
             // Filter out workers that don't have enough resources to launch an executor
    -        val usableWorkers = workers.toArray.filter(_.state == WorkerState.ALIVE)
    +        var usableWorkers = workers.toArray.filter(_.state == WorkerState.ALIVE)
               .filter(worker => worker.memoryFree >= app.desc.memoryPerExecutorMB &&
                 worker.coresFree >= coresPerExecutor)
               .sortBy(_.coresFree).reverse
    +
    +        if (spreadOutApps) {
    --- End diff --
    
    When we set `spark.dynamicAllocation.enabled=true`, many executors for the same app are often assigned to the same node.
    Can we also change this behavior for the dynamic allocation case?
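
    For context, the spread-out vs. packed placement being discussed can be sketched roughly as follows. This is a simplified illustration only, not the actual `Master.scheduleExecutorsOnWorkers` code; the object and method names (`SpreadSketch`, `assignCores`) are hypothetical, and it assigns single cores rather than whole executors:

    ```scala
    // Simplified sketch of spread-out vs. packed core assignment
    // (hypothetical names; not Spark's real scheduling code).
    object SpreadSketch {
      // coresFree: free cores per usable worker; returns cores assigned per worker.
      def assignCores(coresLeft: Int, coresFree: Array[Int], spreadOut: Boolean): Array[Int] = {
        val assigned = Array.fill(coresFree.length)(0)
        var toAssign = math.min(coresLeft, coresFree.sum)
        if (spreadOut) {
          // Round-robin: hand out one core at a time across all workers.
          var pos = 0
          while (toAssign > 0) {
            if (coresFree(pos) - assigned(pos) > 0) {
              assigned(pos) += 1
              toAssign -= 1
            }
            pos = (pos + 1) % coresFree.length
          }
        } else {
          // Packed: fill each worker completely before moving to the next.
          var pos = 0
          while (toAssign > 0) {
            val take = math.min(toAssign, coresFree(pos) - assigned(pos))
            assigned(pos) += take
            toAssign -= take
            pos += 1
          }
        }
        assigned
      }
    }
    ```

    With 4 cores left and three workers each having 4 free cores, the spread-out mode yields `(2, 1, 1)` while the packed mode yields `(4, 0, 0)`, which is the concentration-on-one-node effect described above.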


---
