srowen commented on a change in pull request #24131: [SPARK-27192][Core] 
spark.task.cpus should be less or equal than spark.executor.cores
URL: https://github.com/apache/spark/pull/24131#discussion_r267371743
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/SparkContext.scala
 ##########
 @@ -2665,8 +2665,19 @@ object SparkContext extends Logging {
     // When running locally, don't try to re-execute tasks on failure.
     val MAX_LOCAL_TASK_FAILURES = 1
 
+    val cpusPerTask = sc.conf.get(CPUS_PER_TASK)
+
+    def checkClusterExecutorCores(): Unit = {
 
 Review comment:
   Can you just modify this little utility method to take a "cores" parameter 
and then use it in all the cases below? It can default to 
`sc.conf.get(EXECUTOR_CORES)`, and then below you can set it to 1 for the local 
case, for example.
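
   For illustration only, a rough sketch of the suggested shape (the helper name 
`checkCpusPerTask` and the error message are placeholders, not the actual patch; 
`sc`, `CPUS_PER_TASK`, `EXECUTOR_CORES`, and `SparkException` are already in 
scope inside `SparkContext.createTaskScheduler`):

```scala
val cpusPerTask = sc.conf.get(CPUS_PER_TASK)

// Take the available cores as a parameter so every master pattern below
// can reuse the same check; cluster cases use the executor-cores default.
def checkCpusPerTask(cores: Int = sc.conf.get(EXECUTOR_CORES)): Unit = {
  if (cpusPerTask > cores) {
    throw new SparkException(
      s"${CPUS_PER_TASK.key}=$cpusPerTask must be <= the available cores ($cores).")
  }
}

// Cluster masters: use the default (spark.executor.cores).
checkCpusPerTask()
// Single-core local master: pass the core count explicitly.
checkCpusPerTask(cores = 1)
```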

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
