tgravescs commented on a change in pull request #27722: [SPARK-30969][CORE] 
Remove resource coordination support from Standalone
URL: https://github.com/apache/spark/pull/27722#discussion_r385356809
 
 

 ##########
 File path: docs/configuration.md
 ##########
 @@ -2782,5 +2762,4 @@ There are configurations available to request resources 
for the driver: <code>sp
 
 Spark will use the configurations specified to first request containers with 
the corresponding resources from the cluster manager. Once it gets the 
container, Spark launches an Executor in that container which will discover 
what resources the container has and the addresses associated with each 
resource. The Executor will register with the Driver and report back the 
resources available to that Executor. The Spark scheduler can then schedule 
tasks to each Executor and assign specific resource addresses based on the 
resource requirements the user specified. The user can see the resources 
assigned to a task using the <code>TaskContext.get().resources</code> API. On the driver, the user can see the resources assigned with the SparkContext <code>resources</code> call. It's then up to the user to use the assigned addresses to do the processing they want or to pass them into the ML/AI framework they are using.
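As an illustration (not part of the quoted diff), the request side of this flow is driven by configuration; a minimal sketch of a <code>spark-defaults.conf</code> fragment might look like the following, where the resource name <code>gpu</code> and the discovery script path are assumptions for the example:

```
# Hypothetical spark-defaults.conf fragment: request one GPU per executor
# and one GPU per task ("gpu" and the script path are illustrative).
spark.executor.resource.gpu.amount           1
spark.executor.resource.gpu.discoveryScript  /opt/spark/getGpus.sh
spark.task.resource.gpu.amount               1
```

With a setup like this, a task could then read its assigned addresses (in Scala) via <code>TaskContext.get().resources("gpu").addresses</code>.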
 
-See your cluster manager specific page for requirements and details on each of 
- [YARN](running-on-yarn.html#resource-allocation-and-configuration-overview), 
[Kubernetes](running-on-kubernetes.html#resource-allocation-and-configuration-overview)
 and [Standalone 
Mode](spark-standalone.html#resource-allocation-and-configuration-overview). It 
is currently not available with Mesos or local mode. If using local-cluster mode see the Spark Standalone documentation, but be aware that only a single worker resources file or discovery script can be specified, which is shared by all the Workers, so you should enable resource coordination (see <code>spark.resources.coordinateResourcesInStandalone</code>).
-
+See your cluster manager specific page for requirements and details on each of 
- [YARN](running-on-yarn.html#resource-allocation-and-configuration-overview), 
[Kubernetes](running-on-kubernetes.html#resource-allocation-and-configuration-overview)
 and [Standalone 
Mode](spark-standalone.html#resource-allocation-and-configuration-overview). It 
is currently not available with Mesos or local mode. If using local-cluster 
mode please see the Spark Standalone documentation.
 
 Review comment:
  we should modify this to say local-cluster mode is not supported - I guess if we want to be very specific we could say it's unsupported with multiple workers (but I'm fine just saying not supported in general).

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
