GitHub user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-93132743
I talked to @sryza and @tnachen offline about the potential sources of
confusion here. It seems this code used to mistakenly use
`spark.task.cpus` as the number of cores to give Mesos executors, which is
incorrect but happens to work only because `spark.task.cpus` defaults to 1.
I left a more detailed comment on why the existing name of the config
introduces another potential source of confusion. In general, when making
changes in the Spark on Mesos part of the code base, we should be explicit
about which kind of task and executor we are referring to, since these terms
unfortunately have overloaded meanings at the intersection of the two
projects.
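
    To illustrate the distinction, here is a minimal sketch in Scala (not the
    exact code in this PR; the `spark.mesos.executor.cores` name reflects this
    PR's intent but should be treated as illustrative here):

    ```scala
    import org.apache.spark.SparkConf

    object ConfigSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()

        // `spark.task.cpus`: CPUs Spark reserves per Spark task (defaults to 1).
        val taskCpus = conf.getInt("spark.task.cpus", 1)

        // Old behavior: the fine-grained Mesos backend reused taskCpus as the
        // executor's core count, which only worked because of the default of 1.

        // A dedicated setting keeps executor sizing separate from task sizing.
        val mesosExecutorCores = conf.getDouble("spark.mesos.executor.cores", 1.0)

        println(s"task cpus = $taskCpus, mesos executor cores = $mesosExecutorCores")
      }
    }
    ```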
@jongyoul The intended change in behavior here LGTM. Once you address the
wording / naming comments I left, I will go ahead and merge this.