This is an automated email from the ASF dual-hosted git repository.

chengpan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/kyuubi.git


The following commit(s) were added to refs/heads/master by this push:
     new 6da0e62baf [KYUUBI #7036] [DOCS] Improve docs for 
kyuubi-extension-spark-jdbc-dialect
6da0e62baf is described below

commit 6da0e62baf1f1b5a44365f68f778a35e65650be2
Author: Cheng Pan <cheng...@apache.org>
AuthorDate: Wed Apr 23 11:09:29 2025 +0800

    [KYUUBI #7036] [DOCS] Improve docs for kyuubi-extension-spark-jdbc-dialect
    
    ### Why are the changes needed?
    
    This PR removes the page https://kyuubi.readthedocs.io/en/v1.10.1/client/python/pyspark.html
    and merges most of its content into https://kyuubi.readthedocs.io/en/v1.10.1/extensions/engines/spark/jdbc-dialect.html;
    some of the latter's original content is also modified.
    
    The current docs are misleading: users have asked me several times why accessing data stored in the
    Hive warehouse is so slow when they follow the
    [Kyuubi PySpark docs](https://kyuubi.readthedocs.io/en/v1.10.1/client/python/pyspark.html).
    
    Actually, accessing HiveServer2/STS through the Spark JDBC data source is discouraged by the Spark
    community, see [SPARK-47482](https://github.com/apache/spark/pull/45609), even though it is
    technically feasible.
    
    ### How was this patch tested?
    
    It's a docs-only change; review is required.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No.
    
    Closes #7036 from pan3793/jdbc-ds-docs.
    
    Closes #7036
    
    c00ce0706 [Cheng Pan] style
    f2676bd23 [Cheng Pan] [DOCS] Improve docs for 
kyuubi-extension-spark-jdbc-dialect
    
    Authored-by: Cheng Pan <cheng...@apache.org>
    Signed-off-by: Cheng Pan <cheng...@apache.org>
---
 docs/client/python/index.rst                  |   1 -
 docs/client/python/pyspark.md                 | 133 --------------------------
 docs/extensions/engines/spark/jdbc-dialect.md | 130 ++++++++++++++++++++++---
 3 files changed, 117 insertions(+), 147 deletions(-)

diff --git a/docs/client/python/index.rst b/docs/client/python/index.rst
index 5e8ae4228a..8d310ebfd8 100644
--- a/docs/client/python/index.rst
+++ b/docs/client/python/index.rst
@@ -21,5 +21,4 @@ Python
     :maxdepth: 2
 
     pyhive
-    pyspark
     jaydebeapi
diff --git a/docs/client/python/pyspark.md b/docs/client/python/pyspark.md
deleted file mode 100644
index b4fcb08e73..0000000000
--- a/docs/client/python/pyspark.md
+++ /dev/null
@@ -1,133 +0,0 @@
-<!--
-- Licensed to the Apache Software Foundation (ASF) under one or more
-- contributor license agreements.  See the NOTICE file distributed with
-- this work for additional information regarding copyright ownership.
-- The ASF licenses this file to You under the Apache License, Version 2.0
-- (the "License"); you may not use this file except in compliance with
-- the License.  You may obtain a copy of the License at
--
--   http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
--->
-
-# PySpark
-
-[PySpark](https://spark.apache.org/docs/latest/api/python/index.html) is an 
interface for Apache Spark in Python. Kyuubi can be used as JDBC source in 
PySpark.
-
-## Requirements
-
-PySpark works with Python 3.7 and above.
-
-Install PySpark with Spark SQL and optional pandas support on Spark using PyPI 
as follows:
-
-```shell
-pip install pyspark 'pyspark[sql]' 'pyspark[pandas_on_spark]'
-```
-
-For installation using Conda or manually downloading, please refer to [PySpark 
installation](https://spark.apache.org/docs/latest/api/python/getting_started/install.html).
-
-## Preparation
-
-### Prepare JDBC driver
-
-Prepare JDBC driver jar file. Supported Hive compatible JDBC Driver as below:
-
-|                        Driver                        |            Driver 
Class Name            |                                                         
                         Remarks                                                
                                   |
-|------------------------------------------------------|-----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Kyuubi Hive Driver ([doc](../jdbc/kyuubi_jdbc.html)) | 
org.apache.kyuubi.jdbc.KyuubiHiveDriver | Compile for the driver on master 
branch, as [KYUUBI #3484](https://github.com/apache/kyuubi/pull/3485) required 
by Spark JDBC source not yet included in released version. |
-| Hive Driver ([doc](../jdbc/hive_jdbc.html))          | 
org.apache.hive.jdbc.HiveDriver         |
-
-Refer to docs of the driver and prepare the JDBC driver jar file.
-
-### Prepare JDBC Hive Dialect extension
-
-Hive Dialect support is required by Spark for wrapping SQL correctly and 
sending it to the JDBC driver. Kyuubi provides a JDBC dialect extension with 
auto-registered Hive Dialect support for Spark. Follow the instructions in 
[Hive Dialect Support](../../extensions/engines/spark/jdbc-dialect.html) to 
prepare the plugin jar file `kyuubi-extension-spark-jdbc-dialect_-*.jar`.
-
-### Including jars of JDBC driver and Hive Dialect extension
-
-Choose one of the following ways to include jar files in Spark.
-
-- Put the jar file of JDBC driver and Hive Dialect to `$SPARK_HOME/jars` 
directory to make it visible for the classpath of PySpark. And adding 
`spark.sql.extensions = 
org.apache.spark.sql.dialect.KyuubiSparkJdbcDialectExtension` to 
`$SPARK_HOME/conf/spark_defaults.conf.`
-
-- With spark's start shell, include the JDBC driver when submitting the 
application with `--packages`, and the Hive Dialect plugins with `--jars`
-
-```
-$SPARK_HOME/bin/pyspark --py-files PY_FILES \
---packages org.apache.hive:hive-jdbc:x.y.z \
---jars /path/kyuubi-extension-spark-jdbc-dialect_-*.jar 
-```
-
-- Setting jars and config with SparkSession builder
-
-```python
-from pyspark.sql import SparkSession
-
-spark = SparkSession.builder \
-        .config("spark.jars", 
"/path/hive-jdbc-x.y.z.jar,/path/kyuubi-extension-spark-jdbc-dialect_-*.jar") \
-        .config("spark.sql.extensions", 
"org.apache.spark.sql.dialect.KyuubiSparkJdbcDialectExtension") \
-        .getOrCreate()
-```
-
-## Usage
-
-For further information about PySpark JDBC usage and options, please refer to 
Spark's [JDBC To Other 
Databases](https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html).
-
-### Using as JDBC Datasource programmingly
-
-```python
-# Loading data from Kyuubi via HiveDriver as JDBC datasource
-jdbcDF = spark.read \
-  .format("jdbc") \
-  .options(driver="org.apache.hive.jdbc.HiveDriver",
-           url="jdbc:hive2://kyuubi_server_ip:port",
-           user="user",
-           password="password",
-           query="select * from testdb.src_table"
-           ) \
-  .load()
-```
-
-### Using as JDBC Datasource table with SQL
-
-From Spark 3.2.0, [`CREATE DATASOURCE 
TABLE`](https://spark.apache.org/docs/latest/sql-ref-syntax-ddl-create-table-datasource.html)
 is supported to create jdbc source with SQL.
-
-```python
-# create JDBC Datasource table with DDL
-spark.sql("""CREATE TABLE kyuubi_table USING JDBC
-OPTIONS (
-    driver='org.apache.hive.jdbc.HiveDriver',
-    url='jdbc:hive2://kyuubi_server_ip:port',
-    user='user',
-    password='password',
-    dbtable='testdb.some_table'
-)""")
-
-# read data to dataframe
-jdbcDF = spark.sql("SELECT * FROM kyuubi_table")
-
-# write data from dataframe in overwrite mode
-df.writeTo("kyuubi_table").overwrite
-
-# write data from query
-spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table")
-```
-
-### Use PySpark with Pandas
-
-From PySpark 3.2.0, PySpark supports pandas API on Spark which allows you to 
scale your pandas workload out.
-
-Pandas-on-Spark DataFrame and Spark DataFrame are virtually interchangeable. 
More instructions in [From/to pandas and PySpark 
DataFrames](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/pandas_pyspark.html#pyspark).
-
-```python
-import pyspark.pandas as ps
-
-psdf = ps.range(10)
-sdf = psdf.to_spark().filter("id > 5")
-sdf.show()
-```
-
diff --git a/docs/extensions/engines/spark/jdbc-dialect.md 
b/docs/extensions/engines/spark/jdbc-dialect.md
index a04a6df454..30f01e9f00 100644
--- a/docs/extensions/engines/spark/jdbc-dialect.md
+++ b/docs/extensions/engines/spark/jdbc-dialect.md
@@ -15,27 +15,131 @@
 - limitations under the License.
 -->
 
-# Hive Dialect Support
+# Hive JDBC Data Source Dialect
 
-Hive Dialect plugin aims to provide Hive Dialect support to Spark's JDBC 
source.
+The Hive JDBC Data Source dialect plugin aims to provide Hive Dialect support for
+[Spark's JDBC Data Source](https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html).
 It will be automatically registered to Spark and applied to JDBC data sources with the URL prefix `jdbc:hive2://` or `jdbc:kyuubi://`.
 
-Hive Dialect helps to solve failures access Kyuubi. It fails and unexpected 
results when querying data from Kyuubi as JDBC source with Hive JDBC Driver or 
Kyuubi Hive JDBC Driver in Spark, as Spark JDBC provides no Hive Dialect 
support out of box and quoting columns and other identifiers in ANSI as 
"table.column" rather than in HiveSQL style as \`table\`.\`column\`.
+Hive Dialect helps to avoid failures when accessing Kyuubi. Querying data from Kyuubi as a JDBC data source with the
+Hive JDBC Driver or Kyuubi Hive JDBC Driver in Spark fails or returns unexpected results, because Spark JDBC provides
+no Hive Dialect support out of the box and quotes columns and other identifiers in the ANSI style as "table.column"
+rather than in the HiveSQL style as \`table\`.\`column\`.
+
+Note: this is an inefficient way to access data stored in the Hive warehouse; you can find more discussion
+at [SPARK-47482](https://github.com/apache/spark/pull/45609).
 
 ## Features
 
-- quote identifier in Hive SQL style
+- Quote identifiers in Hive SQL style
+
+  e.g. quote `table.column` as \`table\`.\`column\`
+
+- Adapt to Hive data type definitions
+
+  Reference: 
https://cwiki.apache.org/confluence/display/hive/languagemanual+types
+
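+For illustration, a minimal sketch of the identifier quoting the dialect performs (the table and column
+names are hypothetical):
+
+```sql
+-- Without the Hive Dialect, Spark quotes identifiers in the ANSI style:
+SELECT "id", "name" FROM "testdb"."src_table"
+-- With the Hive Dialect, identifiers are quoted in the HiveSQL style:
+SELECT `id`, `name` FROM `testdb`.`src_table`
+```
+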
+## Preparation
+
+### Prepare JDBC driver
+
+Prepare the JDBC driver jar file. Supported Hive-compatible JDBC drivers are listed below:
+
+| Driver                                                    | Driver Class Name                       | Remarks                                                                                                 |
+|-----------------------------------------------------------|-----------------------------------------|---------------------------------------------------------------------------------------------------------|
+| Kyuubi Hive JDBC Driver ([doc](../jdbc/kyuubi_jdbc.html)) | org.apache.kyuubi.jdbc.KyuubiHiveDriver | Use v1.6.1 or later versions, which include [KYUUBI #3484](https://github.com/apache/kyuubi/pull/3485). |
+| Hive JDBC Driver ([doc](../jdbc/hive_jdbc.html))          | org.apache.hive.jdbc.HiveDriver         | The Hive JDBC driver is already included in the official Spark binary distribution.                     |
+
+Refer to the docs of the driver and prepare the JDBC driver jar file.
+
+### Prepare JDBC Hive Dialect extension
+
+Prepare the plugin jar file `kyuubi-extension-spark-jdbc-dialect_-*.jar`.
+
+Get the Kyuubi Hive Dialect Extension jar from Maven Central:
+
+```xml
+<dependency>
+    <groupId>org.apache.kyuubi</groupId>
+    <artifactId>kyuubi-extension-spark-jdbc-dialect_2.12</artifactId>
+    <version>{latest-version}</version>
+</dependency>
+```
+
+Or, compile the extension by executing
+
+```shell
+build/mvn clean package -pl :kyuubi-extension-spark-jdbc-dialect_2.12 -DskipTests
+```
 
-  eg. Quote `table.column` in \`table\`.\`column\`
+Then get the extension jar under `extensions/spark/kyuubi-extension-spark-jdbc-dialect/target`.
+
+If you like, you can compile the extension jar with the corresponding Maven profile in your compile command,
+e.g. you can get the extension jar for Spark 3.5 by compiling with `-Pspark-3.5`, as shown below.
+
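+A sketch of the full command under that assumption:
+
+```shell
+build/mvn clean package -pl :kyuubi-extension-spark-jdbc-dialect_2.12 -DskipTests -Pspark-3.5
+```
+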
+### Including jars of JDBC driver and Hive Dialect extension
+
+Choose one of the following ways to include jar files in Spark.
+
+- Put the jar files of the JDBC driver and the Hive Dialect extension into the `$SPARK_HOME/jars` directory to make
+  them visible to all Spark applications, and add `spark.sql.extensions = org.apache.spark.sql.dialect.KyuubiSparkJdbcDialectExtension`
+  to `$SPARK_HOME/conf/spark-defaults.conf` (see the minimal sketch after this list).
+
+- With each `spark-submit` (or `spark-sql`, `pyspark`, etc.) command, include the JDBC driver when submitting the
+  application with `--packages`, and the Hive Dialect plugin with `--jars`
+
+```shell
+$SPARK_HOME/bin/spark-submit \
+  --packages org.apache.hive:hive-jdbc:x.y.z \
+  --jars /path/kyuubi-extension-spark-jdbc-dialect_-*.jar \
+  ...
+```
+
+- Set the jars and config with the SparkSession builder
+
+```scala
+val spark = SparkSession.builder
+    .config("spark.jars", "/path/hive-jdbc-x.y.z.jar,/path/kyuubi-extension-spark-jdbc-dialect_-*.jar")
+    .config("spark.sql.extensions", "org.apache.spark.sql.dialect.KyuubiSparkJdbcDialectExtension")
+    .getOrCreate()
+```
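+
+For the first option, assuming the jars are already copied into `$SPARK_HOME/jars`, the only required entry is
+the extension registration; a minimal `$SPARK_HOME/conf/spark-defaults.conf` sketch:
+
+```properties
+# Register the Kyuubi Hive Dialect extension for all Spark applications
+spark.sql.extensions    org.apache.spark.sql.dialect.KyuubiSparkJdbcDialectExtension
+```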
 
 ## Usage
 
-1. Get the Kyuubi Hive Dialect Extension jar
-   1. compile the extension by executing `build/mvn clean package -pl 
:kyuubi-extension-spark-jdbc-dialect_2.12 -DskipTests`
-   2. get the extension jar under 
`extensions/spark/kyuubi-extension-spark-jdbc-dialect/target`
-   3. If you like, you can compile the extension jar with the corresponding 
Maven's profile on you compile command, i.e. you can get extension jar for 
Spark 3.5 by compiling with `-Pspark-3.5`
-2. Put the Kyuubi Hive Dialect Extension jar 
`kyuubi-extension-spark-jdbc-dialect_-*.jar` into `$SPARK_HOME/jars`
-3. Enable `KyuubiSparkJdbcDialectExtension`, by setting 
`spark.sql.extensions=org.apache.spark.sql.dialect.KyuubiSparkJdbcDialectExtension`,
 i.e.
-   - add a config into `$SPARK_HOME/conf/spark-defaults.conf`
-   - or add setting config in SparkSession builder
+### Using as a JDBC data source programmatically
+
+```scala
+// Load data from Kyuubi via the Hive JDBC driver as a JDBC data source.
+// Note that the Spark JDBC options `dbtable` and `query` are mutually exclusive; specify only one of them.
+val jdbcDF = spark.read
+    .format("jdbc")
+    .option("driver", "org.apache.hive.jdbc.HiveDriver")
+    .option("url", "jdbc:hive2://kyuubi_server_ip:port")
+    .option("user", "username")
+    .option("password", "password")
+    .option("query", "select * from testdb.src_table")
+    .load()
+```
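+
+Writing back through the JDBC data source follows the same pattern; a minimal sketch, assuming a writable
+target table exists (the name `testdb.dst_table` is hypothetical, and `dbtable` is required for writes):
+
+```scala
+// Append the DataFrame to a table on Kyuubi via the Hive JDBC driver
+jdbcDF.write
+    .format("jdbc")
+    .option("driver", "org.apache.hive.jdbc.HiveDriver")
+    .option("url", "jdbc:hive2://kyuubi_server_ip:port")
+    .option("user", "username")
+    .option("password", "password")
+    .option("dbtable", "testdb.dst_table")
+    .mode("append")
+    .save()
+```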
+
+### Using as a JDBC data source table with SQL
+
+Since Spark 3.2.0, [`CREATE DATASOURCE TABLE`](https://spark.apache.org/docs/latest/sql-ref-syntax-ddl-create-table-datasource.html)
+is supported to create a JDBC source table with SQL.
+
+```sql
+-- create JDBC data source table
+CREATE TABLE kyuubi_table
+USING JDBC
+OPTIONS (
+    driver='org.apache.hive.jdbc.HiveDriver',
+    url='jdbc:hive2://kyuubi_server_ip:port',
+    user='user',
+    password='password',
+    dbtable='testdb.some_table'
+)
+
+-- query data
+SELECT * FROM kyuubi_table
+
+-- write data in overwrite mode
+INSERT OVERWRITE kyuubi_table SELECT ...
+
+-- write data in append mode
+INSERT INTO kyuubi_table SELECT ...
+```
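+
+The created table can then be queried from the DataFrame API as well; a minimal sketch:
+
+```scala
+// Read the JDBC data source table into a DataFrame
+val df = spark.sql("SELECT * FROM kyuubi_table")
+df.show()
+```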
 
