tgravescs commented on a change in pull request #25047: [WIP][SPARK-27371][CORE] Support GPU-aware resources scheduling in Standalone
URL: https://github.com/apache/spark/pull/25047#discussion_r308301258
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/deploy/LocalSparkCluster.scala
 ##########
 @@ -64,7 +64,8 @@ class LocalSparkCluster(
     /* Start the Workers */
     for (workerNum <- 1 to numWorkers) {
       val workerEnv = Worker.startRpcEnvAndEndpoint(localHostname, 0, 0, coresPerWorker,
-        memoryPerWorker, masters, null, Some(workerNum), _conf)
+        memoryPerWorker, masters, null, Some(workerNum), _conf,
+        conf.get(config.Worker.SPARK_WORKER_RESOURCE_FILE))
 
 Review comment:
   We could do that, but then again a cluster admin could write a discovery script that makes sure different workers get different resources; then they don't have to manually create the resourcesFile. I also think there are some weird cases, like you mention, where you have both a resources file and a discovery script and it wouldn't be obvious to the user what happens: one resource they split per worker, but another the discovery script didn't. I realize these are corner cases, but with a config it would be obvious exactly what is going to happen.
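   To make the discovery-script option concrete, here is a minimal, hypothetical sketch of what an admin-written per-worker discovery program could look like. Nothing in it comes from this PR: the WORKER_INDEX environment variable, the two-GPUs-per-worker split, and the output shape (a resource name plus an array of addresses printed as JSON) are assumptions for illustration only.
   
   ```scala
   // Hypothetical discovery program a cluster admin could compile and point the
   // worker's GPU discovery-script setting at. It splits the node's GPUs across
   // workers using an env var the admin sets per worker, and prints a JSON object
   // with a resource name and an array of addresses.
   object PerWorkerGpuDiscovery {
     def main(args: Array[String]): Unit = {
       // Assume the admin exports WORKER_INDEX=0, 1, ... when launching each worker.
       val workerIndex = sys.env.getOrElse("WORKER_INDEX", "0").toInt
       val gpusPerWorker = 2
       // Worker 0 gets GPUs 0-1, worker 1 gets GPUs 2-3, and so on.
       val first = workerIndex * gpusPerWorker
       val addresses = (first until first + gpusPerWorker).map(i => "\"" + i + "\"")
       println(s"""{"name": "gpu", "addresses": [${addresses.mkString(", ")}]}""")
     }
   }
   ```
   
   The point is only that the partitioning logic can live in the script, so each worker ends up with a disjoint set of addresses without a hand-written resources file.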
   
   If we don't add the config, then I think we should leave it as is and just document that the resources file / discovery script for Workers/Driver in Standalone mode needs to list all of the node's resources, or that users need to configure a different resources dir for each.
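   For the "different resources file for each worker" variant, a rough, hypothetical sketch against the LocalSparkCluster code in the diff above (the startRpcEnvAndEndpoint parameters and the SPARK_WORKER_RESOURCE_FILE entry are copied from this WIP patch and may still change; the per-worker file paths are made up):
   
   ```scala
   // Hypothetical only: clone the conf for each worker so every worker reads its
   // own resources file instead of the single file they all share today.
   for (workerNum <- 1 to numWorkers) {
     val workerConf = _conf.clone()
       .set(config.Worker.SPARK_WORKER_RESOURCE_FILE,
         s"/etc/spark/resources/worker$workerNum.json")  // made-up path
     val workerEnv = Worker.startRpcEnvAndEndpoint(localHostname, 0, 0, coresPerWorker,
       memoryPerWorker, masters, null, Some(workerNum), workerConf,
       workerConf.get(config.Worker.SPARK_WORKER_RESOURCE_FILE))
   }
   ```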
