[ https://issues.apache.org/jira/browse/PHOENIX-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881037#comment-17881037 ]

ASF GitHub Bot commented on PHOENIX-6783:
-----------------------------------------

rejeb commented on code in PR #139:
URL: https://github.com/apache/phoenix-connectors/pull/139#discussion_r1755053409


##########
phoenix5-spark3/README.md:
##########
@@ -28,6 +28,18 @@ Apart from the shaded connector JAR, you also need to add the hbase mapredcp lib
 (add the exact paths as appropriate to your system)
 Both the `spark.driver.extraClassPath` and `spark.executor.extraClassPath` properties need to be set to the above classpath. You may add them to spark-defaults.conf, or specify them on the spark-shell or spark-submit command line.
 
+## Configuration properties
+
+| Name  | Default | Usage | Description |
+|-------|---------|-------|-------------|
+| table | empty   | R/W   | table name as `namespace.table_name` |
+| zkUrl | empty   | R/W   | (Optional) List of zookeeper hosts. Deprecated, use `jdbcUrl` instead. Recommended not to set; the value will be taken from hbase-site.xml |

Review Comment:
   Done.
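
To make the properties in the README excerpt above concrete, here is a minimal read sketch in the spark-shell style the README already uses. The table name `NS.TABLE1` and the JDBC URL below are placeholder values, not ones taken from the PR.

```scala
// spark-shell sketch (the `spark` session is predefined in spark-shell).
// "table" is required; "jdbcUrl" is the preferred replacement for the
// deprecated "zkUrl". Both URL options can be omitted when hbase-site.xml
// is on the driver/executor classpath, as the table above recommends.
// NS.TABLE1 and the URL below are placeholder values.
val df = spark.read
  .format("phoenix")
  .option("table", "NS.TABLE1")
  .option("jdbcUrl", "jdbc:phoenix:zk1,zk2,zk3:2181")
  .load()

df.show()
```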





> Phoenix5-Spark3 Connector Spark SQL Support
> -------------------------------------------
>
>                 Key: PHOENIX-6783
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6783
>             Project: Phoenix
>          Issue Type: Bug
>          Components: connectors, spark-connector
>            Environment: # Pure Spark 3.0.3
>  # phoenix5-spark3-shaded-6.0.0-SNAPSHOT.jar located in $SPARK_HOME/jars
>  # SQL used to create the Spark Phoenix data source table is:
> {{create table Table1 (ID long, COL1 string) using phoenix options (table 'Table1', zkUrl '192.168.0.103:2181', primary 'ID')}}
>            Reporter: yj
>            Assignee: rejeb ben rejeb
>            Priority: Major
>
> While conducting the Spark 3.0 integration test using the *phoenix5-spark3-shaded module of the [Phoenix-Connectors|https://github.com/apache/phoenix-connectors] project*, the following issue occurred.
>  
> The test process is as follows:
> 1. I created a table called "TABLE1" in Phoenix and then loaded data into it.
> 2. After that, I tried to select data from that table using Spark.
>  
> As written in the README of the phoenix5-spark3 module, loading Phoenix data into a Spark DataFrame through the Phoenix data source seems to work well.
> However, when I *create a Phoenix Spark data source table through Spark SQL* and try to select from that table, the following error occurs:
> {{java.util.concurrent.ExecutionException: org.apache.spark.sql.AnalysisException: phoenix is not a valid Spark SQL Data Source}}
>  
> I wonder whether Phoenix-Connectors does not support creating Phoenix Spark data source tables, or whether there is another reason for this error.
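
A hedged sketch of the two paths described in the quoted report, reusing the reporter's table name and ZooKeeper address: the DataFrame read is the path the README documents, while the temp-view step is only an assumed workaround for querying through Spark SQL, not the native `CREATE TABLE ... USING phoenix` support the issue asks about.

```scala
// spark-shell sketch (the `spark` session is predefined in spark-shell).
// TABLE1 and the ZooKeeper address come from the report; the temp-view
// step is an assumed workaround, not the Spark SQL data source table
// support that this issue asks about.

// Works per the README: load the Phoenix table into a DataFrame.
val df = spark.read
  .format("phoenix")
  .option("table", "TABLE1")
  .option("zkUrl", "192.168.0.103:2181") // deprecated; jdbcUrl is preferred
  .load()

// Assumed workaround: expose the DataFrame to Spark SQL as a temp view.
df.createOrReplaceTempView("TABLE1_VIEW")
spark.sql("SELECT ID, COL1 FROM TABLE1_VIEW").show()

// The failing path from the report (Spark SQL data source table):
//   CREATE TABLE Table1 (ID long, COL1 string) USING phoenix
//   OPTIONS (table 'Table1', zkUrl '192.168.0.103:2181', primary 'ID')
// -> java.util.concurrent.ExecutionException:
//    org.apache.spark.sql.AnalysisException: phoenix is not a valid Spark SQL Data Source
```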



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
