[ https://issues.apache.org/jira/browse/FLINK-25459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17465946#comment-17465946 ]

Wenlong Lyu commented on FLINK-25459:
-------------------------------------

[~qyw919867774] the matching logic is defined in the SQL standard; you can find more details there if you are interested:
"If the <insert column list> is omitted, then an <insert column list> that 
identifies all columns of T in the ascending sequence of their ordinal 
positions within T is implicit."
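
In other words, the query result is matched to the sink purely by ordinal position, not by name, and the same applies to the fields of a ROW-typed column. A minimal sketch of a workaround (the datagen/blackhole connectors and the source table are placeholders, not taken from your setup): construct the ROW in the query with its fields in the same order as the sink declares them.

{code:sql}
-- Sink declares the ROW fields in the order id, name, age, test.
CREATE TABLE kafka_target (
  ceshi ROW<`id` INT, `name` STRING, `age` INT, `test` ROW<`c` STRING>>
) WITH ('connector' = 'blackhole');

-- Placeholder source with flat columns.
CREATE TABLE kafka_source (
  `name` STRING,
  `id` INT,
  `age` INT,
  `c` STRING
) WITH ('connector' = 'datagen');

-- Fails: ROW(`name`, `id`, ...) is matched against ROW<`id` INT, `name` STRING, ...>
-- by position, so a STRING is compared with an INT at position 0.
-- INSERT INTO kafka_target SELECT ROW(`name`, `id`, `age`, ROW(`c`)) FROM kafka_source;

-- Works: build the ROW with its fields in the sink's declared order.
INSERT INTO kafka_target
SELECT ROW(`id`, `name`, `age`, ROW(`c`))
FROM kafka_source;
{code}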

> When inserting row type fields into sink, the order needs to be maintained
> --------------------------------------------------------------------------
>
>                 Key: FLINK-25459
>                 URL: https://issues.apache.org/jira/browse/FLINK-25459
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table SQL / Planner
>    Affects Versions: 1.14.2
>            Reporter: qyw
>            Priority: Major
>
> When I insert a ROW-typed value into a sink, why do I have to keep the field 
> order inside the ROW the same as in the sink?
> This is the comparison between my query schema and sink schema:
> Query schema: [ceshi: ROW<`name` STRING, `id` INT, `age` INT, `test` ROW<`c` STRING>>]
> Sink schema:  [ceshi: ROW<`id` INT, `name` STRING, `age` INT, `test` ROW<`c` STRING>>]
> An error will be thrown:
> Exception in thread "main" org.apache.flink.table.api.ValidationException: Column types of query result and sink for registered table 'default_catalog.default_database.kafka_target' do not match.
> Cause: Incompatible types for sink column 'ceshi' at position 0.
>  
>  
> Is this behavior reasonable?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
