[ https://issues.apache.org/jira/browse/PHOENIX-2112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14625258#comment-14625258 ]

Josh Mahonin commented on PHOENIX-2112:
---------------------------------------

I can confirm with some quick testing that this version is backwards compatible with Spark 1.3.

For posterity, my test plan was to try the following code in spark-shell with a 
few scenarios:
{code}
import org.apache.phoenix.spark._
val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> "SOME_TABLE", "zkUrl" -> "some_host"))
df.filter(....).count()
{code}

1) Run spark-shell for Spark 1.4 without the patch. Throws ColumnNotFoundException, which matches the bug report.
2) Run spark-shell for Spark 1.4 with the patch. Succeeds and returns the expected count.
3) Run spark-shell for Spark 1.3 without the patch. Succeeds and returns the expected count.
4) Run spark-shell for Spark 1.3 with the patch. Succeeds and returns the expected count.

> Phoenix-Spark need to support UTF8String for spark 1.4.0
> --------------------------------------------------------
>
>                 Key: PHOENIX-2112
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2112
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.4.0
>            Reporter: Yi Tian
>            Assignee: Josh Mahonin
>         Attachments: PHOENIX-2112-v2.patch
>
>
> In Spark 1.4.0, Phoenix-Spark throws an exception when we apply a filter 
> like *a='a'*, because Phoenix did not recognize {{UTF8String}} as a String 
> type and so transformed the expression to *a=a*
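
For illustration only, a minimal sketch of the failure mode described above, not the actual patch: when the filter-compilation step does not recognize Spark 1.4's string wrapper type, the literal is emitted unquoted ({{a=a}} instead of {{a='a'}}). The {{UTF8String}} case class below is a stand-in stub for {{org.apache.spark.unsafe.types.UTF8String}}, and {{compileValue}} is a hypothetical simplified analogue of the plugin's value-to-SQL conversion.

{code}
// Stub standing in for org.apache.spark.unsafe.types.UTF8String,
// which Spark 1.4 uses to wrap string literals in filter expressions.
case class UTF8String(bytes: Array[Byte]) {
  override def toString: String = new String(bytes, "UTF-8")
}

// Simplified analogue of compiling a filter value to Phoenix SQL:
// string-like values must be single-quoted.
def compileValue(value: Any): String = value match {
  case s: String     => s"'$s'"
  case u: UTF8String => s"'${u.toString}'" // the case missing before the patch
  case other         => other.toString      // numbers etc. pass through as-is
}
{code}

Without the {{UTF8String}} case, the wrapper would fall through to the default branch and be emitted without quotes, producing the invalid predicate the reporter observed.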



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
