Github user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/731#issuecomment-89100311
  
    @CodingCat I never suggested that we grab all cores in spread-out mode. The 
decision of how many cores to give each worker is the same as before. What's 
different is how we translate those cores into executors. Previously we launched 
one executor per worker with all the cores given to that worker. Now I am 
suggesting that we launch multiple executors on the worker, each of which has at 
most `spark.deploy.maxCoresPerExecutor` cores. Note that if `maxCoresPerExecutor` 
is not defined, the behavior is the same as before: we just launch one giant 
executor on the worker with all the cores it has been given.
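    
    A minimal sketch of the translation described above; the object and method 
names here are illustrative, not the actual Master scheduling code:
    
    ```scala
    object ExecutorSplit {
      /** Split the cores assigned to a worker into per-executor allocations.
       *  If maxCoresPerExecutor is unset (None), fall back to the old
       *  behavior: one executor that takes all assigned cores. */
      def coresPerExecutor(assignedCores: Int, maxCoresPerExecutor: Option[Int]): Seq[Int] =
        maxCoresPerExecutor match {
          case None => Seq(assignedCores) // old behavior: one giant executor
          case Some(max) =>
            // As many full-sized executors as fit, plus one for the remainder
            val full = Seq.fill(assignedCores / max)(max)
            val rem  = assignedCores % max
            if (rem > 0) full :+ rem else full
        }

      def main(args: Array[String]): Unit = {
        // A worker given 10 cores with spark.deploy.maxCoresPerExecutor = 4
        // would launch executors with 4, 4, and 2 cores.
        println(coresPerExecutor(10, Some(4))) // List(4, 4, 2)
        // With the config unset, one executor gets all 10 cores.
        println(coresPerExecutor(10, None))    // List(10)
      }
    }
    ```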
    
    Does that make sense?

