This is an automated email from the ASF dual-hosted git repository.
ajantha pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git
The following commit(s) were added to refs/heads/master by this push:
new 2129466 [DOC] Running the Thrift JDBC/ODBC server with CarbonExtensions
2129466 is described below
commit 21294662c02253485ed6c2976e420cbd0ba58522
Author: QiangCai <[email protected]>
AuthorDate: Fri Jan 15 10:30:15 2021 +0800
[DOC] Running the Thrift JDBC/ODBC server with CarbonExtensions
Why is this PR needed?
Since version 2.0, CarbonData supports starting the Spark ThriftServer with CarbonExtensions.
What changes were proposed in this PR?
Add documentation on starting the Spark ThriftServer with CarbonExtensions.
Does this PR introduce any user interface change?
No
Is any new testcase added?
No
This closes #4077
---
docs/quick-start-guide.md | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/docs/quick-start-guide.md b/docs/quick-start-guide.md
index be8eb2a..62f5f42 100644
--- a/docs/quick-start-guide.md
+++ b/docs/quick-start-guide.md
@@ -47,7 +47,7 @@ CarbonData can be integrated with Spark, Presto, Flink and Hive execution engine
[Installing and Configuring CarbonData on Spark on YARN Cluster](#installing-and-configuring-carbondata-on-spark-on-yarn-cluster)
-[Installing and Configuring CarbonData Thrift Server for Query Execution](#query-execution-using-carbondata-thrift-server)
+[Installing and Configuring CarbonData Thrift Server for Query Execution](#query-execution-using-the-thrift-server)
#### Presto
@@ -154,7 +154,7 @@ val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(
`SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("<carbon_store_path>", "<local metastore path>")`.
- Data storage location can be specified by `<carbon_store_path>`, like `/carbon/data/store`, `hdfs://localhost:9000/carbon/data/store` or `s3a://carbon/data/store`.
-###### Option 2: Using SparkSession with CarbonExtensions
+###### Option 2: Using SparkSession with CarbonExtensions (since 2.0)
Start Spark shell by running the following command in the Spark directory:
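The exact spark-shell command referred to here lies outside the hunks included in this diff. As an illustrative sketch only (the use of `--jars` and the jar path are assumptions, mirroring the Thrift Server command added later in this commit), it would look roughly like:

```
cd $SPARK_HOME
./bin/spark-shell \
--conf spark.sql.extensions=org.apache.spark.sql.CarbonExtensions \
--jars $SPARK_HOME/carbonlib/apache-carbondata-xxx.jar
```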
@@ -325,9 +325,17 @@ mv carbondata.tar.gz carbonlib/
-## Query Execution Using CarbonData Thrift Server
+## Query Execution Using the Thrift Server
-### Starting CarbonData Thrift Server.
+### Option 1: Starting Thrift Server with CarbonExtensions (since 2.0)
+```
+cd $SPARK_HOME
+./sbin/start-thriftserver.sh \
+--conf spark.sql.extensions=org.apache.spark.sql.CarbonExtensions \
+--jars $SPARK_HOME/carbonlib/apache-carbondata-xxx.jar
+```
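Once the server is running, a quick smoke test over Beeline can confirm that CarbonExtensions is active. The host, port and table name below are illustrative assumptions (port 10000 is the Thrift server default), not part of this commit:

```
cd $SPARK_HOME
./bin/beeline -u jdbc:hive2://localhost:10000 \
-e "CREATE TABLE IF NOT EXISTS carbon_smoke_test (id INT, name STRING) STORED AS carbondata"
```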
+
+### Option 2: Starting CarbonData Thrift Server
a. cd `$SPARK_HOME`
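The remaining steps of Option 2 fall outside the hunks shown in this diff. A rough sketch of the kind of spark-submit invocation the guide describes (the CarbonThriftServer class name and the `<carbon_store_path>` argument are assumptions drawn from the surrounding documentation, not from this diff):

```
./bin/spark-submit \
--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer \
$SPARK_HOME/carbonlib/apache-carbondata-xxx.jar <carbon_store_path>
```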
@@ -391,11 +399,10 @@ $SPARK_HOME/carbonlib/apache-carbondata-xxx.jar
$SPARK_HOME/carbonlib/apache-carbondata-xxx.jar
```
-### Connecting to CarbonData Thrift Server Using Beeline.
+### Connecting to Thrift Server Using Beeline.
```
cd $SPARK_HOME
-./sbin/start-thriftserver.sh
./bin/beeline -u jdbc:hive2://<thriftserver_host>:port
Example
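# Illustrative only, not part of this commit: the host and port below are assumptions,
# using localhost and the Thrift server's default port 10000.
./bin/beeline -u jdbc:hive2://localhost:10000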