GitHub user sryza commented on the pull request:

    https://github.com/apache/spark/pull/5063#issuecomment-82763410
  
    Are you saying that the new configuration option controls the number of cores that Mesos allocates to the executor itself, *not* for use by tasks?
    
    Or is it that the executor can go above the configured number but never below it?
    
    In either case, I don't think it's a good idea to make this Mesos-specific. In theory we might want to add something similar for Spark on YARN later. If it's the former, maybe something like `spark.executor.frameworkCores`; if it's the latter, maybe something like `spark.executor.minCores`.
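    For concreteness, here is a minimal sketch of how the former semantics might look from the user's side, assuming the hypothetical key name `spark.executor.frameworkCores` proposed above (it is not a real Spark configuration key; only `spark.executor.cores` exists today):

        import org.apache.spark.{SparkConf, SparkContext}

        val conf = new SparkConf()
          .setAppName("FrameworkCoresExample")
          // Hypothetical key from this discussion: cores reserved for the
          // executor process itself, not usable by tasks.
          .set("spark.executor.frameworkCores", "1")
          // Existing Spark setting: cores available to tasks.
          .set("spark.executor.cores", "4")

        val sc = new SparkContext(conf)

    A cluster-manager-agnostic name like this would let YARN adopt the same setting later without introducing a second, Mesos-only knob.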


