[ https://issues.apache.org/jira/browse/FLINK-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Young closed FLINK-13438.
------------------------------
    Resolution: Fixed

master: 2994f0e44b53c85535f2f29fb43d320ace91f6f8

> Support date type in Hive
> -------------------------
>
>                 Key: FLINK-13438
>                 URL: https://issues.apache.org/jira/browse/FLINK-13438
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Hive
>            Reporter: Caizhi Weng
>            Assignee: Rui Li
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.10.0
>
>         Attachments: 0001-hive.patch
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Similar to the JDBC connectors, the Hive connectors communicate with the 
> Flink framework using TableSchema, which contains DataType. Since time data 
> read from and written to the Hive connectors must be java.sql.* types, while 
> the default conversion classes of our time data types are java.time.*, we 
> have to fix the Hive connector to support DataTypes.DATE/TIME/TIMESTAMP.
> But currently, when reading tables from Hive, the table schema is created 
> from Hive's schema, so the time types in the created schema will be SQL time 
> types rather than local time types. If a user specifies a local time type in 
> the table schema when creating a table in Hive, they will get a different 
> schema when reading it back. This is undesirable.
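
For illustration only (not part of the original report), a minimal sketch of the bridging the description refers to, assuming the Table API's DataTypes factory and DataType#bridgedTo; the field names and schema are hypothetical, not the actual fix in the referenced commit:

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.types.DataType;

public class HiveTimeTypesSketch {
    public static void main(String[] args) {
        // Hive hands time values to the connector as java.sql.* objects, while
        // Flink's DATE/TIMESTAMP default to java.time.* conversion classes, so
        // the connector has to bridge the conversion class explicitly.
        DataType hiveDate = DataTypes.DATE().bridgedTo(java.sql.Date.class);
        // Hive timestamps carry nanosecond precision.
        DataType hiveTimestamp =
                DataTypes.TIMESTAMP(9).bridgedTo(java.sql.Timestamp.class);

        // Hypothetical schema the connector would report back to the planner.
        TableSchema schema = TableSchema.builder()
                .field("dt", hiveDate)
                .field("ts", hiveTimestamp)
                .build();

        System.out.println(schema);
    }
}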



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
