tgravescs commented on a change in pull request #24406: [SPARK-27024] Executor
interface for cluster managers to support GPU and other resources
URL: https://github.com/apache/spark/pull/24406#discussion_r283022273
##########
File path: core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
##########
@@ -71,6 +82,99 @@ private[spark] class CoarseGrainedExecutorBackend(
}(ThreadUtils.sameThread)
}
+  // Check that the executor resources at startup will satisfy the user specified task
+  // requirements (spark.task.resource.*) and that they match the executor configs
+  // specified by the user (spark.executor.resource.*) to catch mismatches between what
+  // the user requested and what resource manager gave or what the discovery script found.
+  private def checkExecResourcesMeetTaskRequirements(
Review comment:
I don't completely follow what you are asking for here. We are comparing three things: the spark.task.resource.* -> count values against the spark.executor.resource.* -> count values, and both against the resources actually found by the discovery script or passed in, which is a Map[resourceName, ResourceInformation]. You can't keep the resource prefix on the type, or it won't compare properly against the keys of that Map.

I can certainly make this more generic to handle both the executor and driver sides, and I made some code changes in that direction, but I would prefer to wait until the JIRA that implements the driver side is finished, to make sure we don't need anything else. This function will likely have to move somewhere else anyway.
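For illustration only, here is a minimal sketch of the three-way check described above, assuming a SparkConf and the discovered Map[resourceName, ResourceInformation]. The object name, error messages, and the import location of ResourceInformation are assumptions, not the actual PR code:

```scala
// Sketch only: placeholder object and imports; ResourceInformation's package is assumed.
import org.apache.spark.{SparkConf, SparkException}
import org.apache.spark.resource.ResourceInformation

object ResourceCheckSketch {
  // Compare the three sources: spark.task.resource.<name>.count,
  // spark.executor.resource.<name>.count, and the resources actually discovered
  // (or passed in by the cluster manager), keyed by bare resource name.
  def checkExecResourcesMeetTaskRequirements(
      conf: SparkConf,
      actualResources: Map[String, ResourceInformation]): Unit = {

    // Strip the config prefixes so both config maps are keyed by the bare
    // resource name, matching the keys of actualResources.
    def countsWithPrefix(prefix: String): Map[String, Long] =
      conf.getAllWithPrefix(prefix)
        .collect { case (k, v) if k.endsWith(".count") => k.stripSuffix(".count") -> v.toLong }
        .toMap

    val taskCounts = countsWithPrefix("spark.task.resource.")
    val execCounts = countsWithPrefix("spark.executor.resource.")

    taskCounts.foreach { case (rName, taskCount) =>
      val execCount = execCounts.getOrElse(rName, throw new SparkException(
        s"Task requires resource $rName but spark.executor.resource.$rName.count is not set"))
      // Addresses actually found by the discovery script or handed over by the cluster manager.
      val found = actualResources.get(rName).map(_.addresses.length).getOrElse(0)
      if (execCount < taskCount || found < execCount) {
        throw new SparkException(s"Resource $rName mismatch: task needs $taskCount, " +
          s"executor config requests $execCount, discovered $found")
      }
    }
  }
}
```

Stripping the config prefixes up front is what makes the comparison line up: all three sides end up keyed by the bare resource name, so lookups against the discovered Map work directly.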