Github user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/731#issuecomment-85795396
  
    @CodingCat Thanks for merging the two cases. However, I still find the code overly complex. In particular, I think we can modify far less by rewriting [this section](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/master/Master.scala#L578) of the original code with something like the following:
    ```scala
    // Now that we've decided how many cores to give on each node, let's actually give them
    for (pos <- 0 until numUsable) {
      val maxCoresPerExecutor = app.maxCoresPerExecutor.getOrElse(Int.MaxValue)
      while (assigned(pos) > 0) {
        val coresForThisExecutor = math.min(maxCoresPerExecutor, assigned(pos))
        val exec = app.addExecutor(usableWorkers(pos), coresForThisExecutor)
        assigned(pos) -= coresForThisExecutor
        launchExecutor(usableWorkers(pos), exec)
        app.state = ApplicationState.RUNNING
      }
    }
    ```
    Then, in the else case (where we don't spread out apps), we have to do something similar.
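    In case it helps, here is the chunking behaviour I have in mind in isolation. This is a standalone sketch, not anything in `Master.scala`; `chunk` and its parameter names are made up for illustration:

    ```scala
    import scala.collection.mutable.ArrayBuffer

    // Standalone sketch: split the cores assigned on one worker into
    // executor-sized chunks, each no larger than maxCoresPerExecutor
    // (unbounded if None). Names here are illustrative only.
    object CoreChunking {
      def chunk(assignedCores: Int, maxCoresPerExecutor: Option[Int]): Seq[Int] = {
        val limit = maxCoresPerExecutor.getOrElse(Int.MaxValue)
        val chunks = ArrayBuffer.empty[Int]
        var remaining = assignedCores
        while (remaining > 0) {
          val coresForThisExecutor = math.min(limit, remaining)
          chunks += coresForThisExecutor
          remaining -= coresForThisExecutor
        }
        chunks.toSeq
      }

      def main(args: Array[String]): Unit = {
        // 7 cores with a 3-core cap -> three executors: 3, 3, 1
        assert(chunk(7, Some(3)) == Seq(3, 3, 1))
        // No cap -> a single executor gets all 7 cores
        assert(chunk(7, None) == Seq(7))
      }
    }
    ```

    The point being that the per-executor cap falls out of one `math.min` per iteration, so neither branch of the scheduler needs much restructuring.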
    
    Another potential issue is that if we schedule multiple executors that belong to the same app on the same worker, we might not be able to distinguish them. This is what Patrick was suggesting on the [JIRA](https://issues.apache.org/jira/browse/SPARK-1706) (see point 3). I haven't verified whether this is actually a problem, but it would be good to keep it in mind.


