jiangxb1987 commented on a change in pull request #26047:
[SPARK-27492][DOC][FOLLOWUP] Update resource scheduling user docs
URL: https://github.com/apache/spark/pull/26047#discussion_r332224494
##########
File path: docs/configuration.md
##########
@@ -2639,7 +2639,7 @@ Also, you can modify or add configurations at runtime:
GPUs and other accelerators have been widely used for accelerating special
workloads, e.g.,
deep learning and signal processing. Spark now supports requesting and
scheduling generic resources, such as GPUs, with a few caveats. The current
implementation requires that the resource have addresses that can be allocated
by the scheduler. It requires your cluster manager to support and be properly
configured with the resources.
-There are configurations available to request resources for the driver:
<code>spark.driver.resource.{resourceName}.amount</code>, request resources for
the executor(s): <code>spark.executor.resource.{resourceName}.amount</code> and
specify the requirements for each task:
<code>spark.task.resource.{resourceName}.amount</code>. The
<code>spark.driver.resource.{resourceName}.discoveryScript</code> config is
required on YARN, Kubernetes and a client side Driver on Spark Standalone.
<code>spark.driver.executor.{resourceName}.discoveryScript</code> config is
required for YARN and Kubernetes. Kubernetes also requires
<code>spark.driver.resource.{resourceName}.vendor</code> and/or
<code>spark.executor.resource.{resourceName}.vendor</code>. See the config
descriptions above for more information on each.
+There are configurations available to request resources for the driver:
<code>spark.driver.resource.{resourceName}.amount</code>, request resources for
the executor(s): <code>spark.executor.resource.{resourceName}.amount</code>, and
specify the requirements for each task:
<code>spark.task.resource.{resourceName}.amount</code>. The
<code>spark.driver.resource.{resourceName}.discoveryScript</code> config is
required on YARN, Kubernetes, and with a client-side driver on Spark Standalone.
The <code>spark.executor.resource.{resourceName}.discoveryScript</code> config is
required for YARN and Kubernetes. Kubernetes also requires
<code>spark.driver.resource.{resourceName}.vendor</code> and/or
<code>spark.executor.resource.{resourceName}.vendor</code>. See the config
descriptions above for more information on each.
Review comment:
`spark.driver.executor.{resourceName}.discoveryScript` ->
`spark.executor.resource.{resourceName}.discoveryScript`
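For context, these configs are typically passed via `--conf` at submit time. A minimal sketch, assuming a resource named `gpu`, a YARN cluster, and an example discovery script path (`/opt/spark/getGpus.sh` and the application jar name are placeholders, not values from this PR):

```shell
# Sketch: request one GPU for the driver and each executor, one GPU per task.
# The discovery script prints the addresses of the GPUs visible to the process.
spark-submit \
  --master yarn \
  --conf spark.driver.resource.gpu.amount=1 \
  --conf spark.driver.resource.gpu.discoveryScript=/opt/spark/getGpus.sh \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.executor.resource.gpu.discoveryScript=/opt/spark/getGpus.sh \
  --conf spark.task.resource.gpu.amount=1 \
  my-app.jar
```

Note that the executor-side key is `spark.executor.resource.…`, matching the correction suggested above; on Kubernetes the `…vendor` configs (e.g. `nvidia.com` for NVIDIA GPUs) would also be needed.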