Ngone51 commented on a change in pull request #25047: [WIP][SPARK-27371][CORE] Support GPU-aware resources scheduling in Standalone
URL: https://github.com/apache/spark/pull/25047#discussion_r309295601
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/deploy/LocalSparkCluster.scala
 ##########
 @@ -64,7 +64,8 @@ class LocalSparkCluster(
     /* Start the Workers */
     for (workerNum <- 1 to numWorkers) {
       val workerEnv = Worker.startRpcEnvAndEndpoint(localHostname, 0, 0, coresPerWorker,
-        memoryPerWorker, masters, null, Some(workerNum), _conf)
+        memoryPerWorker, masters, null, Some(workerNum), _conf,
+        conf.get(config.Worker.SPARK_WORKER_RESOURCE_FILE))
 
 Review comment:
   > but then again a cluster admin could write a discovery script to make sure different workers get different resources.
   
   I'm not sure it is easy for an admin to do this within a script (see the sketch below). But I agree that it would be better to have a separate config option if we do expect an admin to do this.
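   
   To make the difficulty concrete, here is a minimal sketch of what such a per-worker discovery script might look like, assuming multiple workers share one host and coordinate through a host-local counter file. The GPU count, counter path, and locking scheme are all illustrative assumptions, not part of this PR; the only contract assumed is that the script prints a resource JSON on stdout.
   
   ```python
   #!/usr/bin/env python3
   # Hypothetical per-worker GPU discovery script (sketch, not part of this PR).
   # Each worker process on the host claims the next free GPU index by
   # incrementing a shared counter file under an exclusive lock.
   import fcntl
   import json
   
   NUM_GPUS = 4                            # assumed number of GPUs on this host
   COUNTER_FILE = "/tmp/spark_gpu_claims"  # assumed host-local coordination file
   
   with open(COUNTER_FILE, "a+") as f:
       fcntl.flock(f, fcntl.LOCK_EX)   # serialize concurrent worker startups
       f.seek(0)
       claimed = int(f.read() or "0")  # GPUs already handed out on this host
       if claimed >= NUM_GPUS:
           raise SystemExit("no free GPU left for this worker")
       f.seek(0)
       f.truncate()
       f.write(str(claimed + 1))       # record the claim; lock drops on close
   
   # Emit the resource JSON the worker is expected to read from stdout.
   print(json.dumps({"name": "gpu", "addresses": [str(claimed)]}))
   ```
   
   Even this small sketch needs locking and some way to reset the counter between cluster restarts, which is why I suspect it is not trivial for every admin.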
   
   So, let me add it later.
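   
   For reference, I imagine the file passed via `SPARK_WORKER_RESOURCE_FILE` holding something like the JSON below. This is only a guess at the shape, borrowing ResourceAllocation-style entries from the resource-scheduling work; the exact schema in this WIP may differ:
   
   ```json
   [
     {
       "id": {"componentName": "spark.worker", "resourceName": "gpu"},
       "addresses": ["0", "1"]
     }
   ]
   ```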
