Ngone51 commented on a change in pull request #25047: [WIP][SPARK-27371][CORE]
Support GPU-aware resources scheduling in Standalone
URL: https://github.com/apache/spark/pull/25047#discussion_r307993683
##########
File path: core/src/main/scala/org/apache/spark/deploy/LocalSparkCluster.scala
##########
@@ -64,7 +64,8 @@ class LocalSparkCluster(
/* Start the Workers */
for (workerNum <- 1 to numWorkers) {
val workerEnv = Worker.startRpcEnvAndEndpoint(localHostname, 0, 0, coresPerWorker,
- memoryPerWorker, masters, null, Some(workerNum), _conf)
+ memoryPerWorker, masters, null, Some(workerNum), _conf,
+ conf.get(config.Worker.SPARK_WORKER_RESOURCE_FILE))
Review comment:
Using the same resources file for different workers (whether in a local cluster
or a real cluster) really doesn't make sense if the resources file is intended to
let the cluster admin configure different resources per worker. We can just pass
None and only use the discovery script for LocalSparkCluster, since it is only
used for testing purposes.
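Concretely, the call site here could just look like this (a rough sketch against the signature this PR introduces; only the last argument changes):

```scala
// Sketch: local-cluster workers never read a resources file, so pass None
// and let each worker fall back to the discovery script.
val workerEnv = Worker.startRpcEnvAndEndpoint(localHostname, 0, 0, coresPerWorker,
  memoryPerWorker, masters, null, Some(workerNum), _conf, None)
```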
And for a real cluster, how about this:
When the user configures a resources file (even if a discovery script is
configured at the same time), we just acquire resources from it and do not go
through acquireResources() anymore, assuming the user has already configured
different resources across workers. If no file is configured, we use the
discovery script and call acquireResources() to make sure each worker gets
resources different from the others.
And we don't introduce a new configuration here; we just document this more
specifically and rely on the file's existence to decide which path to take.
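Roughly the branching I have in mind (names like setupWorkerResources, parseResourceFile, discoverResources and acquireResources are only illustrative here, not the actual helpers in this PR):

```scala
// Illustrative sketch of the proposed decision logic.
def setupWorkerResources(
    resourcesFileOpt: Option[String],
    discoveryScriptOpt: Option[String]): Map[String, Seq[String]] = {
  resourcesFileOpt match {
    case Some(resourcesFile) =>
      // The admin has laid out per-worker resources explicitly, so trust
      // the file and skip acquireResources() entirely.
      parseResourceFile(resourcesFile)
    case None =>
      // No file configured: run the discovery script, then go through
      // acquireResources() so concurrent workers on the same host don't
      // end up claiming the same devices.
      acquireResources(discoverResources(discoveryScriptOpt))
  }
}
```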
WDYT ?