[ https://issues.apache.org/jira/browse/SPARK-33446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17231693#comment-17231693 ]

Apache Spark commented on SPARK-33446:
--------------------------------------

User 'warrenzhu25' has created a pull request for this issue:
https://github.com/apache/spark/pull/30370

> [CORE] Add config spark.executor.coresOverhead
> ----------------------------------------------
>
>                 Key: SPARK-33446
>                 URL: https://issues.apache.org/jira/browse/SPARK-33446
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.0.1
>            Reporter: Zhongwei Zhu
>            Priority: Major
>
> Add config spark.executor.coresOverhead to request extra cores per executor. 
> This config is helpful in the following case:
> Suppose the memory/CPU ratio of the physical machines or VMs is 3 GB per 
> core, but our Spark job needs 6 GB per task. If we request memory at that 
> rate, the extra cores backing the memory sit idle on the node and are wasted.
> If we could request those extra cores explicitly, without increasing the 
> number of cores per executor used for task allocation, they would not be 
> wasted. 
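A rough sketch of the arithmetic behind the proposal (the numbers mirror the 3 GB/core vs. 6 GB/task example above; the config semantics follow the description in this issue and are an assumption until the PR is merged):

```python
# Hypothetical illustration of the resource math motivating
# spark.executor.coresOverhead. All values are from the example above.
MACHINE_GB_PER_CORE = 3   # cluster allocates memory at 3 GB per core
MEM_PER_TASK_GB = 6       # the job wants 6 GB per task
TASK_CORES = 2            # spark.executor.cores: concurrent tasks per executor

# Memory the executor must request to give each task slot 6 GB.
executor_memory_gb = TASK_CORES * MEM_PER_TASK_GB          # 12 GB

# How many physical cores that memory effectively occupies on the node.
cores_backing_memory = executor_memory_gb // MACHINE_GB_PER_CORE  # 4

# Without the proposed config: only the task cores are requested, so the
# remaining cores backing the memory are stranded (wasted) on the node.
stranded_cores = cores_backing_memory - TASK_CORES         # 2

# With the proposed config: request the stranded cores as overhead, so the
# container matches the node's memory/CPU ratio while task concurrency
# (spark.executor.cores) stays at 2.
cores_overhead = stranded_cores
total_cores_requested = TASK_CORES + cores_overhead        # 4
```

With this, the container requests 4 cores and 12 GB (matching the 3 GB/core ratio, so nothing is stranded), while Spark still schedules only 2 concurrent tasks per executor, each with 6 GB.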



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
