[ https://issues.apache.org/jira/browse/FLINK-25459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

qyw updated FLINK-25459:
------------------------
    Description: 
When I insert a row-typed value into a sink, why do I need to keep the ROW's
field order identical to the sink schema?

Here is a comparison of my query schema and my sink schema:

Query schema: [ceshi: ROW<`name` STRING, `id` INT, `age` INT, `test` ROW<`c` STRING>>]
Sink schema:  [ceshi: ROW<`id` INT, `name` STRING, `age` INT, `test` ROW<`c` STRING>>]

The following error is thrown:

Exception in thread "main" org.apache.flink.table.api.ValidationException: Column types of query result and sink for registered table 'default_catalog.default_database.kafka_target' do not match.
Cause: Incompatible types for sink column 'ceshi' at position 0.

Is this behavior reasonable?
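
A minimal workaround sketch, assuming a source table named kafka_source that
produces the query schema above (the table name and alias are assumptions):
rebuild the nested row in the sink's field order, and re-assign the field
names with an explicit CAST, since ROW(...) alone produces anonymous fields.

-- Rebuild `ceshi` in the sink's field order (`id` before `name`).
-- `kafka_source` is an assumed source table with the query schema above.
INSERT INTO kafka_target
SELECT
  CAST(
    ROW(s.ceshi.id, s.ceshi.name, s.ceshi.age, s.ceshi.test)
    AS ROW<`id` INT, `name` STRING, `age` INT, `test` ROW<`c` STRING>>
  ) AS ceshi
FROM kafka_source AS s;

The CAST makes the target field names and types unambiguous; without it the
constructed row would still have to line up with the sink type positionally.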


> When inserting row type fields into sink, the order needs to be maintained
> --------------------------------------------------------------------------
>
>                 Key: FLINK-25459
>                 URL: https://issues.apache.org/jira/browse/FLINK-25459
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table SQL / Planner
>    Affects Versions: 1.14.2
>            Reporter: qyw
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
