mridulm commented on a change in pull request #26682: [SPARK-29306][CORE] Stage 
Level Sched: Executors need to track what ResourceProfile they are created with 
URL: https://github.com/apache/spark/pull/26682#discussion_r371505003
 
 

 ##########
 File path: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
 ##########
 @@ -455,7 +456,8 @@ private[spark] class ApplicationMaster(
       val executorMemory = _sparkConf.get(EXECUTOR_MEMORY).toInt
       val executorCores = _sparkConf.get(EXECUTOR_CORES)
       val dummyRunner = new ExecutorRunnable(None, yarnConf, _sparkConf, driverUrl, "<executorId>",
-        "<hostname>", executorMemory, executorCores, appId, securityMgr, localResources)
+        "<hostname>", executorMemory, executorCores, appId, securityMgr, localResources,
+        ResourceProfile.DEFAULT_RESOURCE_PROFILE_ID)
 
 Review comment:
   I meant the driver. Yes, this will need to be specified a priori at launch time and can't be changed dynamically. It makes more sense in cluster mode (in client mode, the driver typically runs on the user's local node or a notebook server).
   
   By non-parallelizable code, I meant the work the driver runs itself as part of a computation, which is not executed on executors - for example, the error computation in an ML training loop; a rough sketch follows below.
   
   `spark.driver.resource.*`, applied to the AM in cluster mode, is exactly what I was looking for - thanks. A sketch of setting it is below.
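
   For reference, a hedged sketch of those confs (the resource name, amount, and discovery-script path are placeholder values; `spark.driver.resource.{name}.amount` / `.discoveryScript` are the conf keys). In cluster mode they must be supplied at submit time, e.g. via `--conf` to `spark-submit`, since the AM/driver container is sized before any user code runs.

   ```scala
   import org.apache.spark.SparkConf

   // Sketch only, with placeholder values. In YARN cluster mode these size the
   // AM container hosting the driver, so pass them at spark-submit time;
   // setting them in application code after launch is too late.
   val conf = new SparkConf()
     .set("spark.driver.resource.gpu.amount", "1")
     .set("spark.driver.resource.gpu.discoveryScript", "/opt/spark/bin/getGpus.sh")
   ```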

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

