[ https://issues.apache.org/jira/browse/FLINK-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894690#comment-16894690 ]
Jingsong Lee commented on FLINK-13438:
--------------------------------------
Hi [~lirui], I think we should add tests for the DATE/TIME/TIMESTAMP types in the Hive
source, sink and UDx using the blink planner (and maybe the flink planner too).
Hi [~TsReaper], I think you should explain in the Jira which cases and code lead to
this bug, to make the problem easier to understand.
> Fix Hive connector with DataTypes.DATE/TIME/TIMESTAMP support
> -------------------------------------------------------------
>
> Key: FLINK-13438
> URL: https://issues.apache.org/jira/browse/FLINK-13438
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / Hive
> Reporter: Caizhi Weng
> Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> Similar to the JDBC connectors, the Hive connector communicates with the Flink framework
> through TableSchema, which contains DataType. Since the time data read from and
> written to the Hive connector must be java.sql.* types, while the default conversion
> classes of our time data types are java.time.*, we have to fix the Hive connector
> to support DataTypes.DATE/TIME/TIMESTAMP.
> However, when reading tables from Hive, the table schema is currently created
> from Hive's own schema, so the time types in the created schema will be the sql time
> types, not the local time types. If a user specifies a local time type in the table
> schema when creating a table in Hive, they will get a different schema when
> reading it back. This is undesired.