mridulm commented on a change in pull request #27773: [SPARK-29154][CORE] Update Spark scheduler for stage level scheduling
URL: https://github.com/apache/spark/pull/27773#discussion_r389521856
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/resource/ResourceUtils.scala
 ##########
 @@ -392,11 +392,13 @@ private[spark] object ResourceUtils extends Logging {
       s"${resourceRequest.id.resourceName}")
   }
 
-  def validateTaskCpusLargeEnough(execCores: Int, taskCpus: Int): Boolean = {
+  def validateTaskCpusLargeEnough(sparkConf: SparkConf, execCores: Int, taskCpus: Int): Boolean = {
     // Number of cores per executor must meet at least one task requirement.
-    if (execCores < taskCpus) {
-      throw new SparkException(s"The number of cores per executor (=$execCores) has to be >= " +
-        s"the number of cpus per task = $taskCpus.")
+    if (!sparkConf.get(TASKSET_MANAGER_SPECULATION_TESTING)) {
 
 Review comment:
   Why is this check guarded by the flag? Shouldn't we always throw this exception when the required task cpus > executor cores? Does this apply only to the default profile (via CPUS_PER_TASK)?
   Can non-default resource profiles also hit this?
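   
   For illustration, a minimal sketch of the unconditional check being suggested, keeping the signature from the diff above. `SparkConf` and `SparkException` are existing Spark classes; the rest is an assumption, not the actual patch:
   
   ```scala
   import org.apache.spark.{SparkConf, SparkException}
   
   // Sketch only: fail fast whenever an executor cannot fit even one task,
   // with no testing-flag escape hatch. The sparkConf parameter is kept to
   // match the signature in the diff, though this sketch does not read it.
   def validateTaskCpusLargeEnough(sparkConf: SparkConf, execCores: Int, taskCpus: Int): Boolean = {
     if (execCores < taskCpus) {
       throw new SparkException(s"The number of cores per executor (=$execCores) has to be >= " +
         s"the number of cpus per task = $taskCpus.")
     }
     true
   }
   ```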
   
