This is an automated email from the ASF dual-hosted git repository.
yao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new 977f64f0904e [SPARK-46357] Replace incorrect documentation use of setConf with conf.set
977f64f0904e is described below
commit 977f64f0904e46e72cbe5b2252f2657dde29c90c
Author: Nicholas Chammas <[email protected]>
AuthorDate: Thu Dec 14 11:07:20 2023 +0800
[SPARK-46357] Replace incorrect documentation use of setConf with conf.set
### What changes were proposed in this pull request?
`setConf` is a method on `SQLContext`, not `SparkSession`. The docs likely referred to `SQLContext` at one point, but they now refer to `SparkSession`, which does not have this method.
This PR updates the docs to use the appropriate method for making
configuration settings against `SparkSession`.
### Why are the changes needed?
The current documentation is incorrect.
### Does this PR introduce _any_ user-facing change?
Yes.
### How was this patch tested?
No testing.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #44290 from nchammas/SPARK-46357-setConf-confset.
Authored-by: Nicholas Chammas <[email protected]>
Signed-off-by: Kent Yao <[email protected]>
---
docs/sql-data-sources-avro.md | 2 +-
docs/sql-data-sources-parquet.md | 2 +-
docs/sql-performance-tuning.md | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/docs/sql-data-sources-avro.md b/docs/sql-data-sources-avro.md
index 82f876eae2c5..898afe9de87f 100644
--- a/docs/sql-data-sources-avro.md
+++ b/docs/sql-data-sources-avro.md
@@ -330,7 +330,7 @@ Data source options of Avro can be set via:
</tr></table>
## Configuration
-Configuration of Avro can be done using the `setConf` method on SparkSession or by running `SET key=value` commands using SQL.
+Configuration of Avro can be done via `spark.conf.set` or by running `SET key=value` commands using SQL.
<table>
<thead><tr><th><b>Property Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Since Version</b></th></tr></thead>
<tr>
diff --git a/docs/sql-data-sources-parquet.md b/docs/sql-data-sources-parquet.md
index 20f6d556cdf7..7d8034321481 100644
--- a/docs/sql-data-sources-parquet.md
+++ b/docs/sql-data-sources-parquet.md
@@ -431,7 +431,7 @@ Other generic options can be found in <a href="https://spark.apache.org/docs/lat
### Configuration
-Configuration of Parquet can be done using the `setConf` method on `SparkSession` or by running
+Configuration of Parquet can be done via `spark.conf.set` or by running
`SET key=value` commands using SQL.
<table>
diff --git a/docs/sql-performance-tuning.md b/docs/sql-performance-tuning.md
index 2dec65cc553e..4ede18d1938b 100644
--- a/docs/sql-performance-tuning.md
+++ b/docs/sql-performance-tuning.md
@@ -31,7 +31,7 @@ Spark SQL can cache tables using an in-memory columnar format by calling `spark.
Then Spark SQL will scan only required columns and will automatically tune compression to minimize
memory usage and GC pressure. You can call `spark.catalog.uncacheTable("tableName")` or `dataFrame.unpersist()` to remove the table from memory.
-Configuration of in-memory caching can be done using the `setConf` method on `SparkSession` or by running
+Configuration of in-memory caching can be done via `spark.conf.set` or by running
`SET key=value` commands using SQL.
<table>
@@ -297,7 +297,7 @@ This feature coalesces the post shuffle partitions based on the map output stati
</tr>
</table>
-### Spliting skewed shuffle partitions
+### Splitting skewed shuffle partitions
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]