[
https://issues.apache.org/jira/browse/SPARK-33446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-33446:
------------------------------------
Assignee: (was: Apache Spark)
> [CORE] Add config spark.executor.coresOverhead
> ----------------------------------------------
>
> Key: SPARK-33446
> URL: https://issues.apache.org/jira/browse/SPARK-33446
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 3.0.1
> Reporter: Zhongwei Zhu
> Priority: Major
>
> Add a config spark.executor.coresOverhead to request extra cores per executor.
> This config would be helpful in cases like the following:
> Suppose that for our physical machines or VMs the memory/CPU ratio is 3 GB per
> core, but our Spark job needs 6 GB per task. To get 6 GB per task under that
> ratio, we must request two cores per task, and the extra cores become unwanted
> task slots, so resources are wasted.
> If we could instead request the extra cores without affecting the number of
> cores per executor used for task allocation, the extra cores would not be
> wasted as idle task slots.