Filed an issue to track this problem. [1]

[1] https://issues.apache.org/jira/browse/FLINK-16693
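
In case others hit the same error before the issue is fixed: the workaround that worked for me was switching to the Blink planner. A minimal setup sketch against the Flink 1.10 Table API might look like the following (the table name is the one from my example below; treat this as an illustration, not a drop-in fix):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class BlinkPlannerExample {
    public static void main(String[] args) {
        // Select the Blink planner instead of the legacy (old) planner.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv =
                StreamTableEnvironment.create(env, settings);

        // The INSERT that fails sink validation under the old planner
        // due to the Timestamp vs. LocalDateTime bridging mismatch.
        tEnv.sqlUpdate(
                "INSERT INTO kafka.test.tbl_a "
                + "SELECT 'ann', 1000, TIMESTAMP '2019-12-30 00:00:00'");
    }
}
```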

Best,
Paul Lam

> On 20 Mar 2020, at 17:17, Paul Lam <paullin3...@gmail.com> wrote:
> 
> Hi Jark,
> 
> Sorry for my late reply. 
> 
> Yes, I’m using the old planner. I’ve tried the Blink planner, and it works 
> well.
> 
> We would like to switch to the Blink planner, but we’ve developed some custom 
> features on top of the old planner, so it would take some time to port the 
> code. I might give fixing the old planner a try, if it’s not too involved.
> 
> Best,
> Paul Lam
> 
>> On 19 Mar 2020, at 17:13, Jark Wu <imj...@gmail.com> wrote:
>> 
>> Hi Paul,
>> 
>> Are you using the old planner? Did you try the Blink planner? I guess it may 
>> be a bug in the old planner, which doesn't work well with the new types.
>> 
>> Best,
>> Jark
>> 
>> On Thu, 19 Mar 2020 at 16:27, Paul Lam <paullin3...@gmail.com> wrote:
>> Hi,
>> 
>> Recently I upgraded a simple application that inserts static data into a 
>> table from 1.9.0 to 1.10.0, and encountered a timestamp type incompatibility 
>> problem during table sink validation.
>> 
>> The SQL is like:
>> ```
>> -- sink schema: (user_name STRING, user_id INT, login_time TIMESTAMP)
>> INSERT INTO kafka.test.tbl_a
>> SELECT 'ann', 1000, TIMESTAMP '2019-12-30 00:00:00'
>> ```
>> 
>> And the error thrown:
>> ```
>> Field types of query result and registered TableSink `kafka`.`test`.`tbl_a` 
>> do not match.
>>       Query result schema: [EXPR$0: String, EXPR$1: Integer, EXPR$2: 
>> Timestamp]
>>       TableSink schema:    [user_name: String, user_id: Integer, login_time: 
>> LocalDateTime]
>> ```
>> 
>> After some digging, I found the root cause might be that since FLINK-14645, 
>> timestamp fields defined via TableFactory are bridged to 
>> java.time.LocalDateTime, while timestamp literals are still backed by 
>> java.sql.Timestamp.
>> 
>> Is my reasoning correct? And is there any workaround? Thanks a lot!
>> 
>> Best,
>> Paul Lam
>> 
> 
