[ https://issues.apache.org/jira/browse/FLINK-24608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17441009#comment-17441009 ]

Timo Walther commented on FLINK-24608:
--------------------------------------

I would vote for option 1. In theory, shortening CHAR/VARCHAR values could 
even affect the primary key (e.g. "asdf" and "aaa" would both become "a" with 
VARCHAR(1), so two distinct keys collapse into one). It is safer to apply the 
truncation before the {{SinkUpsertMaterializer}}, and users can add another 
keyBy and upsert-materialize step to normalize the output. Having another 
operator in the pipeline seems unavoidable unless we pass only the 
{{rowtimeIndex}} directly to the sink and every sink implements custom logic 
for reading the timestamp.
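To make the collision concrete, here is a minimal, Flink-independent Java 
sketch (the {{truncate}} helper is hypothetical and only stands in for the 
planner's length enforcement): two distinct upsert keys collapse into one once 
VARCHAR(1) is applied, so a materializer placed after the truncation would 
only ever see a single key.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class TruncationKeyCollision {

    // Hypothetical helper standing in for CHAR/VARCHAR length enforcement.
    static String truncate(String value, int length) {
        return value.length() <= length ? value : value.substring(0, length);
    }

    public static void main(String[] args) {
        // Two distinct upsert keys before truncation ...
        String[] keys = {"asdf", "aaa"};
        Map<String, String> materialized = new HashMap<>();
        for (String key : keys) {
            // ... map to the same key "a" once VARCHAR(1) is enforced,
            // so the second upsert silently overwrites the first.
            materialized.put(truncate(key, 1), key);
        }
        System.out.println(materialized); // prints {a=aaa}: only one row survives
    }
}
{code}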

> Sinks built with the unified sink framework do not receive timestamps when 
> used in Table API
> --------------------------------------------------------------------------------------------
>
>                 Key: FLINK-24608
>                 URL: https://issues.apache.org/jira/browse/FLINK-24608
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Common, Table SQL / Planner
>    Affects Versions: 1.14.0, 1.13.3, 1.15.0
>            Reporter: Fabian Paul
>            Assignee: Marios Trivyzas
>            Priority: Critical
>
> All sinks built with the unified sink framework extract the timestamp from 
> the internal {{StreamRecord}}. The Table API, however, does not populate the 
> timestamp field of the {{StreamRecord}} but instead extracts the timestamp 
> from the actual data.
> We either have to add a dedicated operator before all such sinks to simulate 
> the behavior or allow a customizable timestamp extraction during sink 
> translation.
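For context, the place where these sinks pick up the timestamp is 
{{SinkWriter.Context#timestamp()}} in the unified sink API (1.14-era 
{{org.apache.flink.api.connector.sink.SinkWriter}}). A small Java sketch of 
how a writer typically reads it; the helper class and fallback logic are 
illustrative, not part of Flink:

{code:java}
import org.apache.flink.api.connector.sink.SinkWriter;

/**
 * Illustrative only: shows where a unified-sink writer reads the event
 * timestamp. SinkWriter.Context#timestamp() exposes the timestamp of the
 * underlying StreamRecord; since the Table planner currently leaves it unset,
 * Table API pipelines observe null here.
 */
public final class RecordTimestamps {

    private RecordTimestamps() {}

    static long timestampOrFallback(SinkWriter.Context context, long fallback) {
        Long recordTimestamp = context.timestamp(); // null if the StreamRecord carries no timestamp
        return recordTimestamp != null ? recordTimestamp : fallback;
    }
}
{code}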



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
