Github user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/731#issuecomment-75347509
  
    @CodingCat I looked at this patch much more closely, and I still don't see a need to separate the single-executor and multiple-executors-per-worker cases. More specifically, I see the single-executor case as a special case of the multiple-executors case, where each element of your 2D array holds a one-element list (because there is only one executor on that worker).
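
    To make that concrete, here is a minimal Scala sketch of why the two cases collapse into one. The names are hypothetical, not the actual fields in the patch:

    ```scala
    // Hypothetical bookkeeping: for each worker, the list of core counts
    // assigned to each executor launched on it.
    val coresPerExecutor: Array[List[Int]] = Array(
      List(4, 4), // worker 0: two executors, 4 cores each
      List(8)     // worker 1: the "single executor" case is just a
                  //           one-element list, not a separate code path
    )
    ```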
    
    I think it makes more sense to configure the number of executors per worker directly. Perhaps we need a config that looks something like `spark.deploy.executorsPerWorker`. Then, to prevent one executor from grabbing all the cores on its worker, the user will also need to set `spark.executor.cores`. In fact, we're doing something fairly similar for Mesos coarse-grained mode in #4027, and it would be good to model the general structure of the changes here after that PR. A sketch of how this might look on the user side is below.
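
    As a rough sketch, assuming the proposed (not yet existing) `spark.deploy.executorsPerWorker` key alongside `spark.executor.cores`, a user's configuration might look like this:

    ```scala
    import org.apache.spark.SparkConf

    // Request 2 executors on each standalone worker, each capped at 4 cores.
    // NOTE: spark.deploy.executorsPerWorker is only the name proposed in this
    // comment; it is not an existing configuration key.
    val conf = new SparkConf()
      .setMaster("spark://master:7077")
      .setAppName("multi-executor-demo")
      .set("spark.deploy.executorsPerWorker", "2") // proposed key
      .set("spark.executor.cores", "4")            // cores per executor
    ```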

