[
https://issues.apache.org/jira/browse/PHOENIX-2290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15421571#comment-15421571
]
Hudson commented on PHOENIX-2290:
---------------------------------
SUCCESS: Integrated in Jenkins build Phoenix-master #1360 (See
[https://builds.apache.org/job/Phoenix-master/1360/])
PHOENIX-2236 PHOENIX-2290 PHOENIX-2547 Various phoenix-spark fixes (jmahonin: rev 2afb16dc2032f2be9de220946e97f87336218e80)
* (edit) phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
* (edit) phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRelation.scala
* (edit) phoenix-spark/src/it/resources/setup.sql
> Spark Phoenix cannot recognize Phoenix view fields
> --------------------------------------------------
>
> Key: PHOENIX-2290
> URL: https://issues.apache.org/jira/browse/PHOENIX-2290
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.5.1
> Reporter: Fengdong Yu
> Assignee: Josh Mahonin
> Labels: spark
>
> I created a base table in the HBase shell:
> {code}
> create 'test_table', {NAME => 'cf1', VERSIONS => 1}
> put 'test_table', 'row_key_1', 'cf1:col_1', '200'
> {code}
> This is a very simple table. Then I created a Phoenix view in the Phoenix shell:
> {code}
> create view "test_table" (pk varchar primary key, "cf1"."col_1" varchar)
> {code}
> Then I ran the following in the Spark shell:
> {code}
> val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> "\"test_table\"", "zkUrl" -> "localhost:2181"))
> df.registerTempTable("temp")
> {code}
> {code}
> scala> df.printSchema
> root
> |-- PK: string (nullable = true)
> |-- col_1: string (nullable = true)
> {code}
> sqlContext.sql("select * from temp") --> {color:red}This does work{color}
> Then:
> {code}
> sqlContext.sql("select * from temp where col_1='200' ")
> {code}
> {code}
> java.lang.RuntimeException: org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): Undefined column. columnName=col_1
>   at org.apache.phoenix.mapreduce.PhoenixInputFormat.getQueryPlan(PhoenixInputFormat.java:125)
>   at org.apache.phoenix.mapreduce.PhoenixInputFormat.getSplits(PhoenixInputFormat.java:80)
>   at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
>   at org.apache.phoenix.spark.PhoenixRDD.getPartitions(PhoenixRDD.scala:47)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
>   at scala.Option.getOrElse(Option.scala:120)
> {code}
> {color:red}
> I also tried:
> {code}
> sqlContext.sql("select * from temp where \"col_1\"='200' ") --> EMPTY result, no exception
> {code}
> {code}
> sqlContext.sql("select * from temp where \"cf1\".\"col_1\"='200' ") --> exception, cannot recognize SQL
> {code}
> {color}
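> For completeness, the same predicate can be expressed through the DataFrame Column API instead of a SQL string (editor's sketch, not from the original report; whether this takes the same column-escaping path inside phoenix-spark was not verified here):
> {code}
> // Build the filter with the Column API, so no SQL-string parsing of the
> // case-sensitive column name is involved on the Spark side.
> val filtered = df.filter(df("col_1") === "200")
> filtered.show()
> {code}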
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)