[ https://issues.apache.org/jira/browse/PHOENIX-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094118#comment-15094118 ]

Hudson commented on PHOENIX-2567:
---------------------------------

SUCCESS: Integrated in Phoenix-master #1065 (See 
[https://builds.apache.org/job/Phoenix-master/1065/])
PHOENIX-2567 phoenix-spark: DataFrame API should handle DATE columns (jmahonin: 
rev beb5c8b99323be33b7e15ddb933a7493d9d93e94)
* phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
* phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
* phoenix-spark/src/it/resources/setup.sql


> phoenix-spark: DataFrame API should handle 'DATE' columns
> ---------------------------------------------------------
>
>                 Key: PHOENIX-2567
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2567
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.7.0
>            Reporter: Josh Mahonin
>            Assignee: Josh Mahonin
>             Fix For: 4.7.0
>
>         Attachments: PHOENIX-2567.patch
>
>
> The current implementation binds the Phoenix 'DATE' datatype to the Spark SQL 
> 'TimestampType', which causes a casting error when converting from 
> java.sql.Date to java.sql.Timestamp while using the DataFrame API with Phoenix 
> DATE columns.
> This patch modifies the schema handling to treat DATE columns as the Spark 
> 'DateType' instead. Note that Spark *drops* the hour, minute and second 
> values from DATE columns when interfacing through DataFrames. This follows the 
> java.sql.Date spec, but might not be useful to folks who rely on the 
> hour/minute/second fields when using the DataFrame API with DATE columns. A 
> future improvement would be to map these columns to TimestampType instead to 
> preserve that information, but it's less intuitive and probably shouldn't be 
> the default behaviour.
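> As an illustration of the mapping the patch changes, here is a minimal, 
> hypothetical sketch of translating a Phoenix SQL type name to a Spark 
> Catalyst type, with DATE going to 'DateType' rather than 'TimestampType'. 
> The helper name and the subset of type names handled are assumptions for 
> the example; the actual change lives in PhoenixRDD.scala:
> {code:scala}
> import org.apache.spark.sql.types._
>
> // Hypothetical helper (REPL-style sketch): map a Phoenix SQL type name to a
> // Spark Catalyst type. Only a handful of types are shown for illustration.
> def phoenixSqlTypeToCatalystType(sqlTypeName: String): DataType = sqlTypeName match {
>   case "VARCHAR" | "CHAR"                 => StringType
>   case "INTEGER" | "UNSIGNED_INT"         => IntegerType
>   case "BIGINT" | "UNSIGNED_LONG"         => LongType
>   // Before the patch, DATE mapped to TimestampType, which made Spark try to
>   // cast java.sql.Date to java.sql.Timestamp and fail. DateType avoids the
>   // cast, at the cost of dropping the time-of-day portion of the value.
>   case "DATE" | "UNSIGNED_DATE"           => DateType
>   case "TIMESTAMP" | "UNSIGNED_TIMESTAMP" => TimestampType
>   case _                                  => StringType
> }
> {code}
> For reference, the error surfaced when loading a table with a DATE column 
> through the DataFrame API, along the lines of the following (the table name 
> and zkUrl below are made up):
> {code:scala}
> val df = sqlContext.load(
>   "org.apache.phoenix.spark",
>   Map("table" -> "EVENTS", "zkUrl" -> "localhost:2181"))
> df.select("EVENT_DATE").show()
> {code}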



