[ https://issues.apache.org/jira/browse/PHOENIX-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881019#comment-17881019 ]

ASF GitHub Bot commented on PHOENIX-6783:
-----------------------------------------

rejeb commented on code in PR #139:
URL: https://github.com/apache/phoenix-connectors/pull/139#discussion_r1754930272


##########
phoenix5-spark/README.md:
##########
@@ -300,7 +363,7 @@ to executors as a comma-separated list against the key `phoenixConfigs` i.e (Pho
       .sqlContext
       .read
       .format("phoenix")
-      .options(Map("table" -> "Table1", "jdbcUrl" -> "jdbc:phoenix:phoenix-server:2181", "doNotMapColumnFamily" -> "true"))
+      .options(Map("table" -> "Table1", "jdbcUrl" -> "jdbc:phoenix:zkHost:zkport", "doNotMapColumnFamily" -> "true"))

Review Comment:
   Done
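
For context, a minimal sketch of the full read that the updated README line belongs to, assuming an active SparkSession named `spark`; the table name, the zkHost:zkport placeholder, and the option keys come from the diff above, and the rest is illustrative rather than taken from the README.

    // Read the Phoenix table "Table1" into a DataFrame using the connector.
    val df = spark
      .sqlContext
      .read
      .format("phoenix")
      .options(Map(
        "table" -> "Table1",
        "jdbcUrl" -> "jdbc:phoenix:zkHost:zkport",
        "doNotMapColumnFamily" -> "true"))
      .load()

    // Trigger the read and print a few rows.
    df.show()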





> Phoenix5-Spark3 Connector Spark SQL Support
> -------------------------------------------
>
>                 Key: PHOENIX-6783
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6783
>             Project: Phoenix
>          Issue Type: Bug
>          Components: connectors, spark-connector
>         Environment: # Pure Spark 3.0.3
>  # phoenix5-spark3-shaded-6.0.0-SNAPSHOT.jar located in $SPARK_HOME/jars
>  # SQL used to create the Spark Phoenix data source table:
> {{create table Table1 (ID long, COL1 string) using phoenix options (table 'Table1', zkUrl '192.168.0.103:2181', primary 'ID')}}
>            Reporter: yj
>            Assignee: rejeb ben rejeb
>            Priority: Major
>
> While running a Spark 3.0 integration test with the *Phoenix5-spark3-shaded
> module of the [Phoenix-Connectors|https://github.com/apache/phoenix-connectors]
> project*, the following question came up.
>  
> The test process is as follows:
> 1. I created a table called "TABLE1" in Phoenix and loaded data into it.
> 2. Then I tried to select the data from that table using Spark.
>  
> As described in the README of the Phoenix5-spark3 module, loading Phoenix
> data into a Spark DataFrame via the Phoenix data source seems to work well.
> However, when I *create a Phoenix Spark data source table through Spark SQL*
> and try to select from that data source table, the following error occurs:
> {{java.util.concurrent.ExecutionException: 
> org.apache.spark.sql.AnalysisException: phoenix is not a valid Spark SQL Data 
> Source}}
>  
> I wonder whether Phoenix-Connectors does not support creating Phoenix Spark
> data source tables, or whether there is some other reason for this error.
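
For reference, a minimal sketch (in Spark's Scala shell) of the flow the reporter describes above: it assumes a SparkSession named `spark`, a Phoenix table TABLE1 that already exists, and the ZooKeeper quorum 192.168.0.103:2181 from the Environment section. The option names mirror the reporter's statement; the README change under review documents a `jdbcUrl` option, so `zkUrl` may need to be replaced accordingly.

    // Register a Phoenix-backed data source table through Spark SQL.
    spark.sql(
      """create table Table1 (ID long, COL1 string)
        |using phoenix
        |options (table 'Table1', zkUrl '192.168.0.103:2181', primary 'ID')""".stripMargin)

    // Selecting from that table is the step that previously failed with
    // "phoenix is not a valid Spark SQL Data Source".
    spark.sql("select ID, COL1 from Table1").show()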



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
