liutang123 edited a comment on issue #24131: [SPARK-27192][Core] 
spark.task.cpus should be less or equal than spark.executor.cores
URL: https://github.com/apache/spark/pull/24131#issuecomment-475132217
 
 
   @jiangxb1987 Thanks for the review.
   Sorry, I didn't notice the checking logic in `SparkConf`, but I think that check is incomplete for local mode.
   For example:
   case 1:
   ```
   $SPARK_HOME/bin/spark-shell --master local[3] --conf spark.task.cpus=2 --conf spark.executor.cores=1
   ```
   `local[3]` means the executor actually has 3 cores, but with #23290's logic an exception is still thrown.
   case 2:
   ```
   $SPARK_HOME/bin/spark-shell --master local --conf spark.task.cpus=2 --conf spark.executor.cores=3
   scala> sc.setLogLevel("INFO")
   scala> sc.parallelize(1 to 9).collect
   ```
   You can see Spark hangs after logging `INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks.`, because `--master local` gives the executor only 1 core (the `spark.executor.cores=3` setting is not used in local mode), so a task requiring 2 CPUs can never be scheduled. The checking logic in #23290 cannot catch this case either.
   So I think we should check `spark.task.cpus` before creating the `TaskScheduler`; a sketch of what I mean follows.
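   Here is a minimal sketch (not the actual patch in this PR) of the kind of check I have in mind: validate `spark.task.cpus` against the cores the master URL really provides, rather than against `spark.executor.cores` alone. The helper name and the local-mode parsing below are illustrative and ignore variants such as `local[N,M]`:
   ```scala
   import org.apache.spark.SparkConf

   object TaskCpusCheck {
     // Sketch only: derive the per-executor core count from the master URL
     // for local mode, and fall back to spark.executor.cores otherwise.
     def validateTaskCpus(conf: SparkConf, master: String): Unit = {
       val taskCpus = conf.getInt("spark.task.cpus", 1)
       val LocalN = """local\[([0-9]+|\*)\]""".r
       val availableCores = master match {
         case "local"     => 1                                        // single worker thread
         case LocalN("*") => Runtime.getRuntime.availableProcessors() // local[*]
         case LocalN(n)   => n.toInt                                  // local[N]
         case _           => conf.getInt("spark.executor.cores", Int.MaxValue)
       }
       require(taskCpus <= availableCores,
         s"spark.task.cpus ($taskCpus) must be <= cores available per executor ($availableCores)")
     }
   }
   ```
   With a check along these lines, case 1 (`local[3]`, `spark.task.cpus=2`) passes, while case 2 (`local`, `spark.task.cpus=2`) fails fast at startup instead of hanging.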
