This is an automated email from the ASF dual-hosted git repository.
dongjoon pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-3.4 by this push:
new b1ea200f6078 [MINOR][DOCS] Make the link of spark properties with YARN more accurate
b1ea200f6078 is described below
commit b1ea200f6078158586a4bc13b39511adb71e7a57
Author: beliefer <[email protected]>
AuthorDate: Wed Apr 10 20:33:43 2024 -0700
[MINOR][DOCS] Make the link of spark properties with YARN more accurate
### What changes were proposed in this pull request?
This PR proposes to make the link to the Spark properties for YARN more accurate.
### Why are the changes needed?
Currently, the `YARN Spark Properties` link points only to the `running-on-yarn.html` page.
We should add the anchor for the `Spark Properties` section.
### Does this PR introduce _any_ user-facing change?
'Yes'.
The link now takes readers directly to the relevant section, which is more convenient for them.
### How was this patch tested?
N/A
### Was this patch authored or co-authored using generative AI tooling?
'No'.
Closes #45994 from beliefer/accurate-yarn-link.
Authored-by: beliefer <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
(cherry picked from commit aca3d1025e2d85c02737456bfb01163c87ca3394)
Signed-off-by: Dongjoon Hyun <[email protected]>
---
docs/job-scheduling.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index 8694ee82e1b8..9639054f6129 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -57,7 +57,7 @@ Resource allocation can be configured as follows, based on the cluster type:
   on the cluster (`spark.executor.instances` as configuration property), while `--executor-memory`
   (`spark.executor.memory` configuration property) and `--executor-cores` (`spark.executor.cores` configuration
   property) control the resources per executor. For more information, see the
-  [YARN Spark Properties](running-on-yarn.html).
+  [YARN Spark Properties](running-on-yarn.html#spark-properties).
 A second option available on Mesos is _dynamic sharing_ of CPU cores. In this mode, each Spark application
 still has a fixed and independent memory allocation (set by `spark.executor.memory`), but when the
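For reference, the flags discussed in this hunk map onto a spark-submit command line
roughly as follows (an illustrative sketch; the application file app.py and the
resource values are hypothetical):

    # --num-executors   -> spark.executor.instances
    # --executor-memory -> spark.executor.memory
    # --executor-cores  -> spark.executor.cores
    spark-submit \
      --master yarn \
      --num-executors 4 \
      --executor-memory 2g \
      --executor-cores 2 \
      app.py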
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]