yaooqinn commented on code in PR #44290:
URL: https://github.com/apache/spark/pull/44290#discussion_r1423771440


##########
docs/sql-performance-tuning.md:
##########
@@ -31,7 +31,7 @@ Spark SQL can cache tables using an in-memory columnar format by calling `spark.
 Then Spark SQL will scan only required columns and will automatically tune compression to minimize
 memory usage and GC pressure. You can call `spark.catalog.uncacheTable("tableName")` or `dataFrame.unpersist()` to remove the table from memory.
 
-Configuration of in-memory caching can be done using the `setConf` method on `SparkSession` or by running
+Configuration of in-memory caching can be done via `SparkSession.conf.set` or by running

Review Comment:
   nit: Like lines 30 and 32, use `spark.conf.set`?
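   For context, a minimal sketch of what the suggested `spark.conf.set` form looks like for the caching options this page documents (`spark.sql.inMemoryColumnarStorage.compressed` and `spark.sql.inMemoryColumnarStorage.batchSize` are real Spark SQL configs; the values shown are their defaults, and the table name is illustrative):

   ```scala
   import org.apache.spark.sql.SparkSession

   val spark = SparkSession.builder().appName("caching-config").getOrCreate()

   // Compress cached columnar data (default: true).
   spark.conf.set("spark.sql.inMemoryColumnarStorage.compressed", "true")
   // Rows per columnar batch when caching (default: 10000).
   spark.conf.set("spark.sql.inMemoryColumnarStorage.batchSize", "10000")

   // Assumes a table named "tableName" is already registered in the catalog;
   // caching picks up the settings above.
   spark.catalog.cacheTable("tableName")
   ```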



##########
docs/sql-data-sources-parquet.md:
##########
@@ -431,7 +431,7 @@ Other generic options can be found in <a href="https://spark.apache.org/docs/lat
 
 ### Configuration
 
-Configuration of Parquet can be done using the `setConf` method on `SparkSession` or by running
+Configuration of Parquet can be done via `SparkSession.conf.set` or by running

Review Comment:
   ditto
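   Same pattern sketched for the Parquet section (both keys are documented Parquet configs; the `mergeSchema` value here is illustrative, its default is `false`):

   ```scala
   import org.apache.spark.sql.SparkSession

   val spark = SparkSession.builder().appName("parquet-config").getOrCreate()

   // Compression codec used when writing Parquet files (default: snappy).
   spark.conf.set("spark.sql.parquet.compression.codec", "snappy")
   // Merge schemas across Parquet part-files on read (default: false).
   spark.conf.set("spark.sql.parquet.mergeSchema", "true")
   ```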


