This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-0.6
in repository https://gitbox.apache.org/repos/asf/incubator-paimon.git


The following commit(s) were added to refs/heads/release-0.6 by this push:
     new b9693332e [doc] Add PaimonSparkSessionExtensions to spark
b9693332e is described below

commit b9693332edf71a04f77b08021ff465cdab458cc1
Author: Jingsong <[email protected]>
AuthorDate: Tue Dec 19 17:03:46 2023 +0800

    [doc] Add PaimonSparkSessionExtensions to spark
---
 docs/content/engines/spark.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/docs/content/engines/spark.md b/docs/content/engines/spark.md
index 9964aa320..4f3768a0c 100644
--- a/docs/content/engines/spark.md
+++ b/docs/content/engines/spark.md
@@ -99,7 +99,8 @@ When starting `spark-sql`, use the following command to register Paimon’s Spar
 ```bash
 spark-sql ... \
     --conf spark.sql.catalog.paimon=org.apache.paimon.spark.SparkCatalog \
-    --conf spark.sql.catalog.paimon.warehouse=file:/tmp/paimon
+    --conf spark.sql.catalog.paimon.warehouse=file:/tmp/paimon \
+    --conf spark.sql.extensions=org.apache.paimon.spark.extensions.PaimonSparkSessionExtensions
 ```
 
 Catalogs are configured using properties under spark.sql.catalog.(catalog_name). In above case, 'paimon' is the
@@ -127,7 +128,8 @@ Hive conf from Spark session, you just need to configure Spark's Hive conf.
 
 ```bash
 spark-sql ... \
-    --conf spark.sql.catalog.spark_catalog=org.apache.paimon.spark.SparkGenericCatalog
+    --conf spark.sql.catalog.spark_catalog=org.apache.paimon.spark.SparkGenericCatalog \
+    --conf spark.sql.extensions=org.apache.paimon.spark.extensions.PaimonSparkSessionExtensions
 ```
 
 Using `SparkGenericCatalog`, you can use Paimon tables in this Catalog or non-Paimon tables such as Spark's csv,
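Both hunks in this commit add the same `spark.sql.extensions` setting. Putting the pieces together, the full `spark-sql` launch command for the first (dedicated `paimon` catalog) case reads as follows; the warehouse path `file:/tmp/paimon` is the example value from the doc, and the `...` stands for whatever other options you pass:

```shell
# Start spark-sql with a Paimon catalog named 'paimon' and register
# the Paimon SQL extensions (required for Paimon-specific DDL/DML).
spark-sql ... \
    --conf spark.sql.catalog.paimon=org.apache.paimon.spark.SparkCatalog \
    --conf spark.sql.catalog.paimon.warehouse=file:/tmp/paimon \
    --conf spark.sql.extensions=org.apache.paimon.spark.extensions.PaimonSparkSessionExtensions
```

The second hunk applies the same extension setting to the `SparkGenericCatalog` variant, where Paimon replaces Spark's default `spark_catalog`.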
