tgravescs commented on a change in pull request #26682: [SPARK-29306][CORE]
Stage Level Sched: Executors need to track what ResourceProfile they are
created with
URL: https://github.com/apache/spark/pull/26682#discussion_r371356665
##########
File path:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
##########
@@ -455,7 +456,8 @@ private[spark] class ApplicationMaster(
val executorMemory = _sparkConf.get(EXECUTOR_MEMORY).toInt
val executorCores = _sparkConf.get(EXECUTOR_CORES)
val dummyRunner = new ExecutorRunnable(None, yarnConf, _sparkConf, driverUrl, "<executorId>",
-      "<hostname>", executorMemory, executorCores, appId, securityMgr, localResources)
+      "<hostname>", executorMemory, executorCores, appId, securityMgr, localResources,
+      ResourceProfile.DEFAULT_RESOURCE_PROFILE_ID)
Review comment:
So I do not cover allowing the master to change its resource profile, since
currently that would require killing and restarting it.
The driver can request extra resources like GPUs, FPGAs, etc. via the
spark.driver.resource.* configs, for example as sketched below.
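A minimal sketch: the spark.driver.resource.* key names are the real Spark
configs, but the amount and the discovery script path here are only
illustrative values.

import org.apache.spark.SparkConf

// Ask the cluster manager for one GPU for the driver; the discovery
// script (illustrative path) reports which GPU addresses were assigned.
val conf = new SparkConf()
  .set("spark.driver.resource.gpu.amount", "1")
  .set("spark.driver.resource.gpu.discoveryScript", "/opt/spark/scripts/getGpus.sh")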
Can you expand on the non-parallelizable code use case?
When you say master, do you mean the Driver/ApplicationMaster (the same
process in cluster mode), the ApplicationMaster in client mode (where it can
be a separate process), or both?
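For context, a rough sketch of the pattern this change introduces, using a
hypothetical, simplified stand-in for ExecutorRunnable (the real class takes
many more parameters); ResourceProfile.DEFAULT_RESOURCE_PROFILE_ID is the
actual constant used in the hunk above.

import org.apache.spark.resource.ResourceProfile

// Hypothetical simplified launch request: each executor is tagged with
// the id of the ResourceProfile it was created for, so the scheduler
// can later match running executors to the resource requirements of
// the stage that requested them.
case class ExecutorLaunchRequest(
    executorId: String,
    hostname: String,
    memoryMb: Int,
    cores: Int,
    resourceProfileId: Int)

// The AM's dummy runner only builds a launch context for logging, so it
// is always tagged with the default profile id.
val dummy = ExecutorLaunchRequest("<executorId>", "<hostname>", 1024, 1,
  ResourceProfile.DEFAULT_RESOURCE_PROFILE_ID)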