This is an automated email from the ASF dual-hosted git repository.
gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new 6aac6428aae [MINOR] Fix some typos in QueryExecution and TaskSchedulerImpl
6aac6428aae is described below
commit 6aac6428aae89915c5634b6a9659aff3d450f173
Author: Silly Carbon <[email protected]>
AuthorDate: Sat Dec 31 10:29:50 2022 +0900
[MINOR] Fix some typos in QueryExecution and TaskSchedulerImpl
### What changes were proposed in this pull request?
Fix some typos in `QueryExecution` and `TaskSchedulerImpl`.
### Why are the changes needed?
The typos confuse users.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
No need to test.
Closes #39308 from silly-carbon/fix-typos.
Authored-by: Silly Carbon <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
---
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala | 2 +-
docs/running-on-yarn.md | 2 +-
.../src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala b/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
index 4580ec53289..91b0c983e4a 100644
--- a/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
+++ b/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
@@ -472,7 +472,7 @@ private[spark] class TaskSchedulerImpl(
val taskCpus = ResourceProfile.getTaskCpusOrDefaultForProfile(taskSetProf, conf)
// check if the ResourceProfile has cpus first since that is common case
if (availCpus < taskCpus) return None
- // only look at the resource other then cpus
+ // only look at the resource other than cpus
val tsResources = taskSetProf.getCustomTaskResources()
if (tsResources.isEmpty) return Some(Map.empty)
val localTaskReqAssign = HashMap[String, ResourceInformation]()
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 4112c71cdf9..35aaece15c5 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -730,7 +730,7 @@ Please make sure to have read the Custom Resource Scheduling and Configuration O
YARN needs to be configured to support any resources the user wants to use
with Spark. Resource scheduling on YARN was added in YARN 3.1.0. See the YARN
documentation for more information on configuring resources and properly
setting up isolation. Ideally the resources are setup isolated so that an
executor can only see the resources it was allocated. If you do not have
isolation enabled, the user is responsible for creating a discovery script that
ensures the resource is not shared betw [...]
YARN supports user defined resource types but has built in types for GPU
(<code>yarn.io/gpu</code>) and FPGA (<code>yarn.io/fpga</code>). For that
reason, if you are using either of those resources, Spark can translate your
request for spark resources into YARN resources and you only have to specify
the <code>spark.{driver/executor}.resource.</code> configs. Note, if you are
using a custom resource type for GPUs or FPGAs with YARN you can change the
Spark mapping using <code>spark.yarn.r [...]
- If you are using a resource other then FPGA or GPU, the user is responsible for specifying the configs for both YARN (<code>spark.yarn.{driver/executor}.resource.</code>) and Spark (<code>spark.{driver/executor}.resource.</code>).
+ If you are using a resource other than FPGA or GPU, the user is responsible for specifying the configs for both YARN (<code>spark.yarn.{driver/executor}.resource.</code>) and Spark (<code>spark.{driver/executor}.resource.</code>).
For example, the user wants to request 2 GPUs for each executor. The user can
just specify <code>spark.executor.resource.gpu.amount=2</code> and Spark will
handle requesting <code>yarn.io/gpu</code> resource type from YARN.
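As a sketch of the GPU example above (not part of this patch; the discovery-script path is a placeholder and the amount is illustrative), the corresponding entries in a <code>spark-defaults.conf</code> might look like:

```properties
# Request 2 GPUs per executor; on YARN, Spark translates this into a
# request for the built-in yarn.io/gpu resource type automatically.
spark.executor.resource.gpu.amount=2

# A discovery script is typically also needed so each executor can
# report which GPU addresses it was assigned (path is a placeholder).
spark.executor.resource.gpu.discoveryScript=/path/to/getGpusResources.sh
```

Because <code>yarn.io/gpu</code> is one of YARN's built-in resource types, no separate <code>spark.yarn.executor.resource.*</code> entry is needed here; that extra config only applies to custom resource types, as described above.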
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
index 796ec41ab51..362615770a3 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
@@ -411,7 +411,7 @@ object QueryExecution {
/**
* Construct a sequence of rules that are used to prepare a planned [[SparkPlan]] for execution.
- * These rules will make sure subqueries are planned, make use the data partitioning and ordering
+ * These rules will make sure subqueries are planned, make sure the data partitioning and ordering
* are correct, insert whole stage code gen, and try to reduce the work done by reusing exchanges
* and subqueries.
*/
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]