[ https://issues.apache.org/jira/browse/SPARK-26340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16717936#comment-16717936 ]

ASF GitHub Bot commented on SPARK-26340:
----------------------------------------

srowen commented on issue #23290: [SPARK-26340][Core] Ensure cores per executor is greater than cpu per task
URL: https://github.com/apache/spark/pull/23290#issuecomment-446348399
 
 
   Yes, it shouldn't be in TaskSchedulerImpl. I think the check is OK as 
there's no good reason you'd allow 1-core executors when all tasks need 2 
cores. My only concern is that spark.executor.cores defaults to 1 in YARN 
mode, but it's probably best to fail fast if that's the case.
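
For illustration, here is a minimal Scala sketch of the kind of fail-fast 
check under discussion. The helper name and its placement are hypothetical; 
the PR's actual location and error wording may differ:

    import org.apache.spark.SparkConf

    // Hypothetical helper illustrating the fail-fast check discussed above.
    object TaskCpusCheck {
      def validate(conf: SparkConf): Unit = {
        // Defaults of 1 are assumed here; spark.executor.cores defaults to 1
        // on YARN (per the comment above) but varies by cluster manager.
        val executorCores = conf.getInt("spark.executor.cores", 1)
        val taskCpus = conf.getInt("spark.task.cpus", 1)
        if (executorCores < taskCpus) {
          throw new IllegalArgumentException(
            s"spark.executor.cores ($executorCores) must be at least " +
            s"spark.task.cpus ($taskCpus); otherwise no task can ever be scheduled.")
        }
      }
    }

Failing fast like this surfaces the misconfiguration at submission time 
instead of letting the job hang silently.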



> Ensure cores per executor is greater than cpu per task
> ------------------------------------------------------
>
>                 Key: SPARK-26340
>                 URL: https://issues.apache.org/jira/browse/SPARK-26340
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.2.2, 2.3.2
>            Reporter: Nicolas Fraison
>            Priority: Minor
>
> No check is performed to ensure spark.task.cpus does not exceed 
> spark.executor.cores, which can leave a job unable to schedule any task 
> while giving no understandable error.
> The check is currently performed only when dynamic allocation is used, in 
> ExecutorAllocationManager.
> Adding the check in TaskSchedulerImpl ensures that an error is raised on 
> the driver.
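
As an illustration of the failure mode described above, a hypothetical 
reproduction (the master URL is a placeholder; the quoted warning is the one 
TaskSchedulerImpl logs while a job starves):

    import org.apache.spark.{SparkConf, SparkContext}

    // With 1-core executors and 2-CPU tasks, no executor ever has enough
    // cores for a single task, so the job hangs without a clear error.
    val conf = new SparkConf()
      .setMaster("spark://master:7077") // placeholder standalone master
      .setAppName("task-cpus-repro")
      .set("spark.executor.cores", "1")
      .set("spark.task.cpus", "2")
    val sc = new SparkContext(conf)
    // Never completes; the driver only repeats the warning
    // "Initial job has not accepted any resources; check your cluster UI
    // to ensure that workers are registered and have sufficient resources".
    sc.parallelize(1 to 10).count()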


