tgravescs commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r903859600
##########
core/src/main/scala/org/apache/spark/resource/ResourceProfile.scala:
##########
@@ -336,9 +340,23 @@ object ResourceProfile extends Logging {
private def getDefaultExecutorResources(conf: SparkConf): Map[String,
ExecutorResourceRequest] = {
val ereqs = new ExecutorResourceRequests()
- val cores = conf.get(EXECUTOR_CORES)
- ereqs.cores(cores)
- val memory = conf.get(EXECUTOR_MEMORY)
+
+    val isStandalone = conf.getOption("spark.master").exists(_.startsWith("spark://"))
+    val isLocalCluster = conf.getOption("spark.master").exists(_.startsWith("local-cluster"))
+    // By default, standalone executors take all available cores, do not have a specific value.
+ val cores = if (isStandalone || isLocalCluster) {
+ conf.getOption(EXECUTOR_CORES.key).map(_.toInt)
+ } else {
+ Some(conf.get(EXECUTOR_CORES))
+ }
+ cores.foreach(ereqs.cores)
+
+    // Setting all resources here, cluster managers will take the resources they respect.
Review Comment:
I guess so. I don't really see the purpose of the comment; I would expect
all resources to be in the ResourceProfile, and what the cluster manager does
with them isn't really the concern of this class. If there were something that
shouldn't be set for a particular cluster manager, I could see commenting on
that, but I would hope we would just update the cluster manager to do the
right thing. I would just remove the comment.
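The diff above makes the executor-core request optional for standalone and local-cluster masters, so executors there can keep their "take all available cores" default. A minimal sketch of that decision logic, using a hypothetical `resolveCores` helper in place of the real `SparkConf` API (the helper name and signature are illustrative, not Spark's):

```scala
// Sketch of the master-URL check from the diff: standalone and
// local-cluster masters leave cores unset unless explicitly configured;
// other cluster managers always resolve to a concrete value.
object DefaultCoresSketch {
  def resolveCores(
      master: Option[String],
      explicitCores: Option[Int],
      defaultCores: Int): Option[Int] = {
    val isStandalone = master.exists(_.startsWith("spark://"))
    val isLocalCluster = master.exists(_.startsWith("local-cluster"))
    if (isStandalone || isLocalCluster) {
      // Standalone/local-cluster: only propagate a value the user set
      // explicitly, so executors can default to all available cores.
      explicitCores
    } else {
      // Other cluster managers: fall back to the configured default.
      Some(explicitCores.getOrElse(defaultCores))
    }
  }

  def main(args: Array[String]): Unit = {
    println(resolveCores(Some("spark://host:7077"), None, 1))          // None
    println(resolveCores(Some("yarn"), None, 1))                       // Some(1)
    println(resolveCores(Some("local-cluster[2,1,1024]"), Some(4), 1)) // Some(4)
  }
}
```

The result of `resolveCores` mirrors `cores.foreach(ereqs.cores)` in the diff: a `None` means no core request is added to the default `ResourceProfile` at all.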
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]