Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/731#discussion_r27915709
--- Diff: docs/configuration.md ---
@@ -714,6 +714,15 @@ Apart from these, the following properties are also available, and may be useful
</td>
</tr>
<tr>
+ <td><code>spark.deploy.maxCoresPerExecutor</code></td>
+ <td>(infinite)</td>
+ <td>
+ The maximum number of cores given to the executor. When this parameter is set, Spark will try to
+ run more than 1 executors on each worker in standalone mode; otherwise, only one executor is
+ launched on each worker.
--- End diff ---
We should note that the one-executor-per-worker limit is per application. Technically, a
worker can still run multiple executors if they belong to different
applications. I would rephrase this as:
"""
The maximum number of cores given to an executor. When this parameter is
set, multiple executors from the same application may run on the same worker,
each with cores equal to or fewer than the configured value. Otherwise, at most
one executor per application may run on each worker, as that executor will
acquire all the worker's cores by default.
This is used in standalone mode only.
"""